[GitHub] [flink] leonardBang commented on pull request #15920: [FLINK-22661][hive] HiveInputFormatPartitionReader can return invalid…

2021-05-19 Thread GitBox


leonardBang commented on pull request #15920:
URL: https://github.com/apache/flink/pull/15920#issuecomment-844721068


   > Since we don't combine splits, there will be at least two splits to read.
   
   You're right that a single split is fine, because with only one split there is no 
switch-splits step to go wrong. Thanks for the explanation.
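
   For illustration, here is a minimal sketch of the switch-splits hazard being discussed (the class and method names below are made up, not the actual `HiveInputFormatPartitionReader`): a reader that keeps per-split state must re-initialize it every time it advances to the next split, and with only one split that code path is never exercised.

```java
// Illustrative only -- not the real Hive partition reader. The per-split state
// (`current`) must be re-initialized on every switch to the next split; a reader
// that forgets to do so can return stale or invalid records, and the bug only
// shows up when there are at least two splits.
import java.util.Iterator;
import java.util.List;

class MultiSplitReader<T> {
    private final Iterator<List<T>> splits; // each element stands in for one split's records
    private Iterator<T> current;            // per-split state

    MultiSplitReader(List<List<T>> allSplits) {
        this.splits = allSplits.iterator();
    }

    /** Returns the next record, or null when all splits are consumed. */
    T nextRecord() {
        while (current == null || !current.hasNext()) {
            if (!splits.hasNext()) {
                return null;
            }
            current = splits.next().iterator(); // switch splits: reset per-split state here
        }
        return current.next();
    }
}
```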


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15924:
URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851


   
   ## CI report:
   
   * 4529d29fc411304c78076e303eff3ebf81aa16ae Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18158)
 
   * 1685626df39ea331106f6b3f7d4f63f506a17f41 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18162)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844486999


   
   ## CI report:
   
   * 6546dc571af0d7f4c04e9e063a6f4b89892e031c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15924:
URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851


   
   ## CI report:
   
   * 14ebe6d069fff468bda4f4bad5ecd3bdeb43cdb0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18115)
 
   * 4529d29fc411304c78076e303eff3ebf81aa16ae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18158)
 
   * 1685626df39ea331106f6b3f7d4f63f506a17f41 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18162)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15924:
URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851


   
   ## CI report:
   
   * 14ebe6d069fff468bda4f4bad5ecd3bdeb43cdb0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18115)
 
   * 4529d29fc411304c78076e303eff3ebf81aa16ae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18158)
 
   * 1685626df39ea331106f6b3f7d4f63f506a17f41 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15768: [FLINK-22451][table] Support (*) as parameter of UDFs in Table API

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15768:
URL: https://github.com/apache/flink/pull/15768#issuecomment-826735938


   
   ## CI report:
   
   * dced81c2ffc8c59f9d9311346e71309129aa73cf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=17473)
 
   * 4f398eeff8439c9c4c052c157cb5360a7b784d8c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18161)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22366) HiveSinkCompactionITCase fails on azure

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348054#comment-17348054
 ] 

Guowei Ma commented on FLINK-22366:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18153=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23742

> HiveSinkCompactionITCase fails on azure
> ---
>
> Key: FLINK-22366
> URL: https://issues.apache.org/jira/browse/FLINK-22366
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16818=logs=245e1f2e-ba5b-5570-d689-25ae21e5302f=e7f339b2-a7c3-57d9-00af-3712d4b15354=23420
> {code}
>  [ERROR] testNonPartition[format = 
> sequencefile](org.apache.flink.connectors.hive.HiveSinkCompactionITCase)  
> Time elapsed: 4.999 s  <<< FAILURE!
> Apr 19 22:25:10 java.lang.AssertionError: expected:<[+I[0, 0, 0], +I[0, 0, 
> 0], +I[1, 1, 1], +I[1, 1, 1], +I[2, 2, 2], +I[2, 2, 2], +I[3, 3, 3], +I[3, 3, 
> 3], +I[4, 4, 4], +I[4, 4, 4], +I[5, 5, 5], +I[5, 5, 5], +I[6, 6, 6], +I[6, 6, 
> 6], +I[7, 7, 7], +I[7, 7, 7], +I[8, 8, 8], +I[8, 8, 8], +I[9, 9, 9], +I[9, 9, 
> 9], +I[10, 0, 0], +I[10, 0, 0], +I[11, 1, 1], +I[11, 1, 1], +I[12, 2, 2], 
> +I[12, 2, 2], +I[13, 3, 3], +I[13, 3, 3], +I[14, 4, 4], +I[14, 4, 4], +I[15, 
> 5, 5], +I[15, 5, 5], +I[16, 6, 6], +I[16, 6, 6], +I[17, 7, 7], +I[17, 7, 7], 
> +I[18, 8, 8], +I[18, 8, 8], +I[19, 9, 9], +I[19, 9, 9], +I[20, 0, 0], +I[20, 
> 0, 0], +I[21, 1, 1], +I[21, 1, 1], +I[22, 2, 2], +I[22, 2, 2], +I[23, 3, 3], 
> +I[23, 3, 3], +I[24, 4, 4], +I[24, 4, 4], +I[25, 5, 5], +I[25, 5, 5], +I[26, 
> 6, 6], +I[26, 6, 6], +I[27, 7, 7], +I[27, 7, 7], +I[28, 8, 8], +I[28, 8, 8], 
> +I[29, 9, 9], +I[29, 9, 9], +I[30, 0, 0], +I[30, 0, 0], +I[31, 1, 1], +I[31, 
> 1, 1], +I[32, 2, 2], +I[32, 2, 2], +I[33, 3, 3], +I[33, 3, 3], +I[34, 4, 4], 
> +I[34, 4, 4], +I[35, 5, 5], +I[35, 5, 5], +I[36, 6, 6], +I[36, 6, 6], +I[37, 
> 7, 7], +I[37, 7, 7], +I[38, 8, 8], +I[38, 8, 8], +I[39, 9, 9], +I[39, 9, 9], 
> +I[40, 0, 0], +I[40, 0, 0], +I[41, 1, 1], +I[41, 1, 1], +I[42, 2, 2], +I[42, 
> 2, 2], +I[43, 3, 3], +I[43, 3, 3], +I[44, 4, 4], +I[44, 4, 4], +I[45, 5, 5], 
> +I[45, 5, 5], +I[46, 6, 6], +I[46, 6, 6], +I[47, 7, 7], +I[47, 7, 7], +I[48, 
> 8, 8], +I[48, 8, 8], +I[49, 9, 9], +I[49, 9, 9], +I[50, 0, 0], +I[50, 0, 0], 
> +I[51, 1, 1], +I[51, 1, 1], +I[52, 2, 2], +I[52, 2, 2], +I[53, 3, 3], +I[53, 
> 3, 3], +I[54, 4, 4], +I[54, 4, 4], +I[55, 5, 5], +I[55, 5, 5], +I[56, 6, 6], 
> +I[56, 6, 6], +I[57, 7, 7], +I[57, 7, 7], +I[58, 8, 8], +I[58, 8, 8], +I[59, 
> 9, 9], +I[59, 9, 9], +I[60, 0, 0], +I[60, 0, 0], +I[61, 1, 1], +I[61, 1, 1], 
> +I[62, 2, 2], +I[62, 2, 2], +I[63, 3, 3], +I[63, 3, 3], +I[64, 4, 4], +I[64, 
> 4, 4], +I[65, 5, 5], +I[65, 5, 5], +I[66, 6, 6], +I[66, 6, 6], +I[67, 7, 7], 
> +I[67, 7, 7], +I[68, 8, 8], +I[68, 8, 8], +I[69, 9, 9], +I[69, 9, 9], +I[70, 
> 0, 0], +I[70, 0, 0], +I[71, 1, 1], +I[71, 1, 1], +I[72, 2, 2], +I[72, 2, 2], 
> +I[73, 3, 3], +I[73, 3, 3], +I[74, 4, 4], +I[74, 4, 4], +I[75, 5, 5], +I[75, 
> 5, 5], +I[76, 6, 6], +I[76, 6, 6], +I[77, 7, 7], +I[77, 7, 7], +I[78, 8, 8], 
> +I[78, 8, 8], +I[79, 9, 9], +I[79, 9, 9], +I[80, 0, 0], +I[80, 0, 0], +I[81, 
> 1, 1], +I[81, 1, 1], +I[82, 2, 2], +I[82, 2, 2], +I[83, 3, 3], +I[83, 3, 3], 
> +I[84, 4, 4], +I[84, 4, 4], +I[85, 5, 5], +I[85, 5, 5], +I[86, 6, 6], +I[86, 
> 6, 6], +I[87, 7, 7], +I[87, 7, 7], +I[88, 8, 8], +I[88, 8, 8], +I[89, 9, 9], 
> +I[89, 9, 9], +I[90, 0, 0], +I[90, 0, 0], +I[91, 1, 1], +I[91, 1, 1], +I[92, 
> 2, 2], +I[92, 2, 2], +I[93, 3, 3], +I[93, 3, 3], +I[94, 4, 4], +I[94, 4, 4], 
> +I[95, 5, 5], +I[95, 5, 5], +I[96, 6, 6], +I[96, 6, 6], +I[97, 7, 7], +I[97, 
> 7, 7], +I[98, 8, 8], +I[98, 8, 8], +I[99, 9, 9], +I[99, 9, 9]]> but 
> was:<[+I[0, 0, 0], +I[1, 1, 1], +I[2, 2, 2], +I[3, 3, 3], +I[4, 4, 4], +I[5, 
> 5, 5], +I[6, 6, 6], +I[7, 7, 7], +I[8, 8, 8], +I[9, 9, 9], +I[10, 0, 0], 
> +I[11, 1, 1], +I[12, 2, 2], +I[13, 3, 3], +I[14, 4, 4], +I[15, 5, 5], +I[16, 
> 6, 6], +I[17, 7, 7], +I[18, 8, 8], +I[19, 9, 9], +I[20, 0, 0], +I[21, 1, 1], 
> +I[22, 2, 2], +I[23, 3, 3], +I[24, 4, 4], +I[25, 5, 5], +I[26, 6, 6], +I[27, 
> 7, 7], +I[28, 8, 8], +I[29, 9, 9], +I[30, 0, 0], +I[31, 1, 1], +I[32, 2, 2], 
> +I[33, 3, 3], +I[34, 4, 4], +I[35, 5, 5], +I[36, 6, 6], +I[37, 7, 7], +I[38, 
> 8, 8], +I[39, 9, 9], +I[40, 0, 0], +I[41, 1, 1], +I[42, 2, 2], +I[43, 3, 3], 
> +I[44, 4, 4], +I[45, 5, 5], +I[46, 6, 6], +I[47, 7, 7], +I[48, 8, 8], +I[49, 
> 9, 9], +I[50, 0, 0], +I[51, 1, 1], +I[52, 2, 2], +I[53, 3, 3], +I[54, 4, 4], 
> +I[55, 5, 5], +I[56, 6, 6], +I[57, 

[jira] [Comment Edited] (FLINK-22704) ZooKeeperHaServicesTest.testCleanupJobData failed

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348049#comment-17348049
 ] 

Guowei Ma edited comment on FLINK-22704 at 5/20/21, 4:46 AM:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada=8170

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18153=logs=4d4a0d10-fca2-5507-8eed-c07f0bdf4887=c2734c79-73b6-521c-e85a-67c7ecae9107=6768


was (Author: maguowei):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada=8170

> ZooKeeperHaServicesTest.testCleanupJobData failed
> -
>
> Key: FLINK-22704
> URL: https://issues.apache.org/jira/browse/FLINK-22704
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Guowei Ma
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18108=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=8172
> {code:java}
> May 19 01:30:02 Expected: a collection containing 
> "1a2850d5759a2e1f4fef9cc7e6abc675"
> May 19 01:30:02  but: was "resource_manager_lock"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22613) FlinkKinesisITCase.testStopWithSavepoint fails

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348052#comment-17348052
 ] 

Guowei Ma commented on FLINK-22613:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18153=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20=14058

> FlinkKinesisITCase.testStopWithSavepoint fails
> --
>
> Key: FLINK-22613
> URL: https://issues.apache.org/jira/browse/FLINK-22613
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0, 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Blocker
>  Labels: test-stability
>
> {code:java}
> 2021-05-10T03:09:18.4601182Z May 10 03:09:18 [ERROR] 
> testStopWithSavepoint(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase)
>   Time elapsed: 3.526 s  <<< FAILURE!
> 2021-05-10T03:09:18.4601884Z May 10 03:09:18 java.lang.AssertionError: 
> 2021-05-10T03:09:18.4605902Z May 10 03:09:18 
> 2021-05-10T03:09:18.4616154Z May 10 03:09:18 Expected: a collection with size 
> a value less than <10>
> 2021-05-10T03:09:18.4616818Z May 10 03:09:18  but: collection size <10> 
> was equal to <10>
> 2021-05-10T03:09:18.4618087Z May 10 03:09:18  at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> 2021-05-10T03:09:18.4618702Z May 10 03:09:18  at 
> org.junit.Assert.assertThat(Assert.java:956)
> 2021-05-10T03:09:18.4619467Z May 10 03:09:18  at 
> org.junit.Assert.assertThat(Assert.java:923)
> 2021-05-10T03:09:18.4620391Z May 10 03:09:18  at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase.testStopWithSavepoint(FlinkKinesisITCase.java:126)
> 2021-05-10T03:09:18.4621115Z May 10 03:09:18  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-05-10T03:09:18.4621751Z May 10 03:09:18  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-05-10T03:09:18.4622475Z May 10 03:09:18  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-05-10T03:09:18.4623142Z May 10 03:09:18  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-05-10T03:09:18.4623783Z May 10 03:09:18  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-05-10T03:09:18.4624514Z May 10 03:09:18  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-05-10T03:09:18.4625246Z May 10 03:09:18  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-05-10T03:09:18.4625967Z May 10 03:09:18  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-05-10T03:09:18.4626671Z May 10 03:09:18  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-05-10T03:09:18.4627349Z May 10 03:09:18  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-10T03:09:18.4627979Z May 10 03:09:18  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-10T03:09:18.4628582Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-05-10T03:09:18.4629251Z May 10 03:09:18  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-05-10T03:09:18.4629950Z May 10 03:09:18  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-05-10T03:09:18.4630616Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-05-10T03:09:18.4631339Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-05-10T03:09:18.4631986Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-05-10T03:09:18.4632630Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-05-10T03:09:18.4633269Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-05-10T03:09:18.4634016Z May 10 03:09:18  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> 2021-05-10T03:09:18.4634786Z May 10 03:09:18  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-10T03:09:18.4635412Z May 10 03:09:18  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-10T03:09:18.4635995Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-05-10T03:09:18.4636656Z May 10 03:09:18  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2021-05-10T03:09:18.4637398Z May 10 03:09:18  at 
> 

[jira] [Comment Edited] (FLINK-22692) CheckpointStoreITCase.testRestartOnRecoveryFailure fails with RuntimeException

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17347278#comment-17347278
 ] 

Guowei Ma edited comment on FLINK-22692 at 5/20/21, 4:44 AM:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18106=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4516

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18108=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4510

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4516


was (Author: maguowei):
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18106=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4516

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18108=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228=4510

> CheckpointStoreITCase.testRestartOnRecoveryFailure fails with RuntimeException
> --
>
> Key: FLINK-22692
> URL: https://issues.apache.org/jira/browse/FLINK-22692
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0, 1.14.0, 1.12.4
>Reporter: Robert Metzger
>Assignee: Roman Khachatryan
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.14.0, 1.13.1, 1.12.5
>
>
> Not sure if it is related to the adaptive scheduler: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18052=logs=8fd9202e-fd17-5b26-353c-ac1ff76c8f28=a0a633b8-47ef-5c5a-2806-3c13b9e48228
> {code}
> May 17 22:29:11 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 1.351 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.CheckpointStoreITCase
> May 17 22:29:11 [ERROR] 
> testRestartOnRecoveryFailure(org.apache.flink.test.checkpointing.CheckpointStoreITCase)
>   Time elapsed: 1.138 s  <<< ERROR!
> May 17 22:29:11 org.apache.flink.runtime.client.JobExecutionException: Job 
> execution failed.
> May 17 22:29:11   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
> May 17 22:29:11   at 
> org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:137)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> May 17 22:29:11   at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:237)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> May 17 22:29:11   at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> May 17 22:29:11   at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:1081)
> May 17 22:29:11   at akka.dispatch.OnComplete.internal(Future.scala:264)
> May 17 22:29:11   at akka.dispatch.OnComplete.internal(Future.scala:261)
> May 17 22:29:11   at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> May 17 22:29:11   at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> May 17 22:29:11   at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> May 17 22:29:11   at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:73)
> May 17 22:29:11   at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> May 17 22:29:11   at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> May 17 22:29:11   at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> May 17 22:29:11   at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22720) UpsertKafkaTableITCase.testAggregate fail due to ConcurrentModificationException

2021-05-19 Thread Guowei Ma (Jira)
Guowei Ma created FLINK-22720:
-

 Summary: UpsertKafkaTableITCase.testAggregate fail due to 
ConcurrentModificationException
 Key: FLINK-22720
 URL: https://issues.apache.org/jira/browse/FLINK-22720
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.14.0
Reporter: Guowei Ma


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=c5612577-f1f7-5977-6ff6-7432788526f7=53f6305f-55e6-561c-8f1e-3a1dde2c77df=6613


{code:java}
2021-05-19T21:28:02.8689083Z May 19 21:28:02 [ERROR] testAggregate[format = 
avro](org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase) 
 Time elapsed: 2.067 s  <<< ERROR!
2021-05-19T21:28:02.8708337Z May 19 21:28:02 
java.util.ConcurrentModificationException
2021-05-19T21:28:02.8710333Z May 19 21:28:02at 
java.util.HashMap$HashIterator.nextNode(HashMap.java:1445)
2021-05-19T21:28:02.8712083Z May 19 21:28:02at 
java.util.HashMap$ValueIterator.next(HashMap.java:1474)
2021-05-19T21:28:02.8712680Z May 19 21:28:02at 
java.util.AbstractCollection.toArray(AbstractCollection.java:141)
2021-05-19T21:28:02.8713142Z May 19 21:28:02at 
java.util.ArrayList.addAll(ArrayList.java:583)
2021-05-19T21:28:02.8716029Z May 19 21:28:02at 
org.apache.flink.table.planner.factories.TestValuesRuntimeFunctions.lambda$getResults$0(TestValuesRuntimeFunctions.java:114)
2021-05-19T21:28:02.8717007Z May 19 21:28:02at 
java.util.HashMap$Values.forEach(HashMap.java:981)
2021-05-19T21:28:02.8718041Z May 19 21:28:02at 
org.apache.flink.table.planner.factories.TestValuesRuntimeFunctions.getResults(TestValuesRuntimeFunctions.java:114)
2021-05-19T21:28:02.8719339Z May 19 21:28:02at 
org.apache.flink.table.planner.factories.TestValuesTableFactory.getResults(TestValuesTableFactory.java:184)
2021-05-19T21:28:02.8720309Z May 19 21:28:02at 
org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestUtils.waitingExpectedResults(KafkaTableTestUtils.java:82)
2021-05-19T21:28:02.8721311Z May 19 21:28:02at 
org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.wordFreqToUpsertKafka(UpsertKafkaTableITCase.java:440)
2021-05-19T21:28:02.8730402Z May 19 21:28:02at 
org.apache.flink.streaming.connectors.kafka.table.UpsertKafkaTableITCase.testAggregate(UpsertKafkaTableITCase.java:73)
2021-05-19T21:28:02.8731390Z May 19 21:28:02at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-05-19T21:28:02.8732095Z May 19 21:28:02at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-05-19T21:28:02.8732935Z May 19 21:28:02at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-05-19T21:28:02.8733726Z May 19 21:28:02at 
java.lang.reflect.Method.invoke(Method.java:498)
2021-05-19T21:28:02.8734598Z May 19 21:28:02at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2021-05-19T21:28:02.8735450Z May 19 21:28:02at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-05-19T21:28:02.8736313Z May 19 21:28:02at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2021-05-19T21:28:02.8737329Z May 19 21:28:02at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-05-19T21:28:02.8738165Z May 19 21:28:02at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2021-05-19T21:28:02.8738989Z May 19 21:28:02at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
2021-05-19T21:28:02.8739741Z May 19 21:28:02at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2021-05-19T21:28:02.8740563Z May 19 21:28:02at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2021-05-19T21:28:02.8741340Z May 19 21:28:02at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2021-05-19T21:28:02.8742077Z May 19 21:28:02at 
org.junit.rules.RunRules.evaluate(RunRules.java:20)
2021-05-19T21:28:02.8742802Z May 19 21:28:02at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2021-05-19T21:28:02.8743594Z May 19 21:28:02at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2021-05-19T21:28:02.8744811Z May 19 21:28:02at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2021-05-19T21:28:02.8745580Z May 19 21:28:02at 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2021-05-19T21:28:02.8746330Z May 19 21:28:02at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2021-05-19T21:28:02.8747222Z May 19 21:28:02at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2021-05-19T21:28:02.8748007Z May 19 21:28:02at 

[jira] [Commented] (FLINK-22702) KafkaSourceITCase.testRedundantParallelism failed

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348050#comment-17348050
 ] 

Guowei Ma commented on FLINK-22702:
---

KafkaSourceITCase.testValueOnlyDeserializer fails because of the same error

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f=f266c805-9429-58ed-2f9e-482e7b82f58b=7009



> KafkaSourceITCase.testRedundantParallelism failed
> -
>
> Key: FLINK-22702
> URL: https://issues.apache.org/jira/browse/FLINK-22702
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18107=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6847
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   ... 1 more
> Caused by: java.lang.IllegalStateException: Consumer is not subscribed to any 
> topics or assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:97)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
>   ... 6 more
> {code}
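
The root IllegalStateException above is standard KafkaConsumer behavior: poll() refuses to run when the consumer has neither a subscription nor an assignment. A minimal sketch of guarding a fetch against that state (illustrative only, not the actual KafkaPartitionSplitReader):

{code:java}
import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

final class GuardedFetch {
    // Illustrative guard: a reader that may temporarily hold zero splits skips polling
    // instead of letting poll() throw "Consumer is not subscribed to any topics or
    // assigned any partitions".
    static <K, V> ConsumerRecords<K, V> fetchOnce(KafkaConsumer<K, V> consumer) {
        if (consumer.assignment().isEmpty()) {
            return ConsumerRecords.empty();
        }
        return consumer.poll(Duration.ofMillis(100));
    }
}
{code}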



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22702) KafkaSourceITCase.testRedundantParallelism failed

2021-05-19 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-22702:
--
Affects Version/s: 1.14.0

> KafkaSourceITCase.testRedundantParallelism failed
> -
>
> Key: FLINK-22702
> URL: https://issues.apache.org/jira/browse/FLINK-22702
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18107=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19=6847
> {code:java}
> Caused by: java.lang.RuntimeException: One or more fetchers have encountered 
> exception
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcherManager.checkErrors(SplitFetcherManager.java:199)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:154)
>   at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
>   at 
> org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:275)
>   at 
> org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:67)
>   at 
> org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:398)
>   at 
> org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:191)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:619)
>   at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:583)
>   at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:758)
>   at org.apache.flink.runtime.taskmanager.Task.run(Task.java:573)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: java.lang.RuntimeException: SplitFetcher thread 0 received 
> unexpected exception while polling the records
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:146)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:101)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   ... 1 more
> Caused by: java.lang.IllegalStateException: Consumer is not subscribed to any 
> topics or assigned any partitions
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1223)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader.fetch(KafkaPartitionSplitReader.java:97)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:56)
>   at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:138)
>   ... 6 more
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on a change in pull request #15944: [FLINK-22694][e2e] Use sql file in TPCH end to end tests

2021-05-19 Thread GitBox


wuchong commented on a change in pull request #15944:
URL: https://github.com/apache/flink/pull/15944#discussion_r635740045



##
File path: flink-end-to-end-tests/test-scripts/test_tpch.sh
##
@@ -51,29 +51,27 @@ ORIGIN_QUERY_DIR="$TARGET_DIR/query"
 MODIFIED_QUERY_DIR="$TPCH_DATA_DIR/modified-query"
 EXPECTED_DIR="$TARGET_DIR/expected"
 RESULT_DIR="$TEST_DATA_DIR/result"
-SQL_CONF="$TEST_DATA_DIR/sql-client-session.conf"
+INIT_SQL="$TEST_DATA_DIR/init_table.sql"
 
 mkdir "$RESULT_DIR"
 
-SOURCES_YAML=$(cat "$TPCH_DATA_DIR/source.yaml")
-SOURCES_YAML=${SOURCES_YAML//\$TABLE_DIR/"$TABLE_DIR"}
+SOURCES_SQL=$(cat "$TPCH_DATA_DIR/source.sql")
+SOURCES_SQL=${SOURCES_SQL//\$TABLE_DIR/"$TABLE_DIR"}
 
 for i in {1..22}
 do
 echo "Running query #$i..."
 
 # First line in sink yaml is ignored

Review comment:
   update the comment. 

##
File path: 
flink-end-to-end-tests/flink-tpch-test/src/main/java/org/apache/flink/table/tpch/TpchResultComparator.java
##
@@ -81,7 +93,10 @@ public static void main(String[] args) throws IOException {
 failed = (e * 0.99 > t || e * 1.01 < t);
 }
 } catch (NumberFormatException nfe2) {
-failed = 
!expected[i].trim().equals(actual[i].trim());
+failed =
+!expected[i]
+.trim()
+.equals(actual[i].replaceAll("\"", 
"").trim());

Review comment:
   Why are there quotes around numeric values?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22704) ZooKeeperHaServicesTest.testCleanupJobData failed

2021-05-19 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-22704:
--
Affects Version/s: 1.14.0

> ZooKeeperHaServicesTest.testCleanupJobData failed
> -
>
> Key: FLINK-22704
> URL: https://issues.apache.org/jira/browse/FLINK-22704
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Guowei Ma
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18108=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=8172
> {code:java}
> May 19 01:30:02 Expected: a collection containing 
> "1a2850d5759a2e1f4fef9cc7e6abc675"
> May 19 01:30:02  but: was "resource_manager_lock"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22704) ZooKeeperHaServicesTest.testCleanupJobData failed

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348049#comment-17348049
 ] 

Guowei Ma commented on FLINK-22704:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=f0ac5c25-1168-55a5-07ff-0e88223afed9=0dbaca5d-7c38-52e6-f4fe-2fb69ccb3ada=8170

> ZooKeeperHaServicesTest.testCleanupJobData failed
> -
>
> Key: FLINK-22704
> URL: https://issues.apache.org/jira/browse/FLINK-22704
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Till Rohrmann
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18108=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392=8172
> {code:java}
> May 19 01:30:02 Expected: a collection containing 
> "1a2850d5759a2e1f4fef9cc7e6abc675"
> May 19 01:30:02  but: was "resource_manager_lock"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22593) SavepointITCase.testShouldAddEntropyToSavepointPath unstable

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348047#comment-17348047
 ] 

Guowei Ma commented on FLINK-22593:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18151=logs=39d5b1d5-3b41-54dc-6458-1e2ddd1cdcf3=a99e99c7-21cd-5a1f-7274-585e62b72f56=4481

> SavepointITCase.testShouldAddEntropyToSavepointPath unstable
> 
>
> Key: FLINK-22593
> URL: https://issues.apache.org/jira/browse/FLINK-22593
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.14.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=9072=logs=cc649950-03e9-5fae-8326-2f1ad744b536=51cab6ca-669f-5dc0-221d-1e4f7dc4fc85
> {code}
> 2021-05-07T10:56:20.9429367Z May 07 10:56:20 [ERROR] Tests run: 13, Failures: 
> 0, Errors: 1, Skipped: 0, Time elapsed: 33.441 s <<< FAILURE! - in 
> org.apache.flink.test.checkpointing.SavepointITCase
> 2021-05-07T10:56:20.9445862Z May 07 10:56:20 [ERROR] 
> testShouldAddEntropyToSavepointPath(org.apache.flink.test.checkpointing.SavepointITCase)
>   Time elapsed: 2.083 s  <<< ERROR!
> 2021-05-07T10:56:20.9447106Z May 07 10:56:20 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> triggering task Sink: Unnamed (3/4) of job 4e155a20f0a7895043661a6446caf1cb 
> has not being executed at the moment. Aborting checkpoint. Failure reason: 
> Not all required tasks are currently running.
> 2021-05-07T10:56:20.9448194Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2021-05-07T10:56:20.9448797Z May 07 10:56:20  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2021-05-07T10:56:20.9449428Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.submitJobAndTakeSavepoint(SavepointITCase.java:305)
> 2021-05-07T10:56:20.9450160Z May 07 10:56:20  at 
> org.apache.flink.test.checkpointing.SavepointITCase.testShouldAddEntropyToSavepointPath(SavepointITCase.java:273)
> 2021-05-07T10:56:20.9450785Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-05-07T10:56:20.9451331Z May 07 10:56:20  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-05-07T10:56:20.9451940Z May 07 10:56:20  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-05-07T10:56:20.9452498Z May 07 10:56:20  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-05-07T10:56:20.9453247Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-05-07T10:56:20.9454007Z May 07 10:56:20  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-05-07T10:56:20.9454687Z May 07 10:56:20  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-05-07T10:56:20.9455302Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-05-07T10:56:20.9455909Z May 07 10:56:20  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-05-07T10:56:20.9456493Z May 07 10:56:20  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-07T10:56:20.9457074Z May 07 10:56:20  at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
> 2021-05-07T10:56:20.9457636Z May 07 10:56:20  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2021-05-07T10:56:20.9458157Z May 07 10:56:20  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-07T10:56:20.9458678Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-05-07T10:56:20.9459252Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-05-07T10:56:20.9459865Z May 07 10:56:20  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-05-07T10:56:20.9460433Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-05-07T10:56:20.9461058Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-05-07T10:56:20.9461607Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-05-07T10:56:20.9462159Z May 07 10:56:20  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-05-07T10:56:20.9462705Z May 07 10:56:20  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #15768: [FLINK-22451][table] Support (*) as parameter of UDFs in Table API

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15768:
URL: https://github.com/apache/flink/pull/15768#issuecomment-826735938


   
   ## CI report:
   
   * dced81c2ffc8c59f9d9311346e71309129aa73cf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=17473)
 
   * 4f398eeff8439c9c4c052c157cb5360a7b784d8c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21952) Make all the "Connection reset by peer" exception wrapped as RemoteTransportException

2021-05-19 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348041#comment-17348041
 ] 

Yun Gao commented on FLINK-21952:
-

Hi [~pnowojski], very sorry for missing the notification and for the late response. I 
think it works: the exceptions caught here are all thrown by the downstream task's 
Netty stack, and there are only a limited number of kinds, so distinguishing them 
directly via keywords should be enough.
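
For illustration, a minimal sketch of that keyword-based check (not the actual CreditBasedPartitionRequestClientHandler, just the idea): treat any exception whose message, anywhere in its cause chain, mentions "Connection reset by peer" as a remote-side failure, which also covers Netty's native `Errors$NativeIoException` variant.

{code:java}
final class ConnectionResetClassifier {

    private ConnectionResetClassifier() {}

    /** True if the throwable, or any of its causes, looks like a peer-side connection reset. */
    static boolean isConnectionResetByPeer(Throwable t) {
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            String message = cur.getMessage();
            if (message != null && message.contains("Connection reset by peer")) {
                return true;
            }
        }
        return false;
    }
}
{code}

An exception handler could then wrap matches as a remote transport error and everything else as a local one.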

> Make all the "Connection reset by peer" exception wrapped as 
> RemoteTransportException
> -
>
> Key: FLINK-21952
> URL: https://issues.apache.org/jira/browse/FLINK-21952
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Yun Gao
>Priority: Major
>  Labels: stale-major
>
> In CreditBasedPartitionRequestClientHandler#exceptionCaught, the IOException 
> or the exception with exact message "Connection reset by peer" are marked as 
> RemoteTransportException. 
> However, with the current Netty implementation, sometimes it might throw 
> {code:java}
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
>  readAddress(..) failed: Connection reset by peer
> {code}
> in some case. It would be also wrapped as LocalTransportException, which 
> might cause some confusion. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache commented on pull request #15920: [FLINK-22661][hive] HiveInputFormatPartitionReader can return invalid…

2021-05-19 Thread GitBox


lirui-apache commented on pull request #15920:
URL: https://github.com/apache/flink/pull/15920#issuecomment-844671552


   > Thanks @lirui-apache for the contribution, LGTM.
   > The bug also exists in single split and the test you used to mock reader 
for lookup also is single split. It will be great if you can update the PR 
description.
   
   Hi @leonardBang, thanks for reviewing. I don't think a single split has this 
issue. The test I added uses multiple splits: note that in `prepareData` I run 
INSERT twice so that at least two files are generated. Since we don't combine 
splits, there will be at least two splits to read.
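
For illustration, a rough sketch of that test setup, assuming Flink's Table API (the table name, path, and connector options below are made up): each INSERT runs as its own job and writes its own file, so reading the table back involves at least two splits and exercises the split-switch path.

```java
// Sketch only, not the actual ITCase. Two separate INSERT jobs produce two files;
// since splits are not combined, the source has at least two splits to read.
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class TwoSplitSetup {
    public static void main(String[] args) throws Exception {
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inBatchMode());
        tEnv.executeSql(
                "CREATE TABLE t (x INT) WITH ("
                        + "'connector'='filesystem', 'path'='/tmp/two-split-table', 'format'='csv')");
        tEnv.executeSql("INSERT INTO t VALUES (1), (2)").await(); // first file -> first split
        tEnv.executeSql("INSERT INTO t VALUES (3), (4)").await(); // second file -> second split
    }
}
```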


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844486999


   
   ## CI report:
   
   * a72b9147bc1f30b72a54ae0beff7515019d39326 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18154)
 
   * efc26ca0fa1d3d92438e1ffb00c464d20543781a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18156)
 
   * 6546dc571af0d7f4c04e9e063a6f4b89892e031c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22719) WindowJoinUtil.containsWindowStartEqualityAndEndEquality should not throw exception

2021-05-19 Thread Andy (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348032#comment-17348032
 ] 

Andy commented on FLINK-22719:
--

[~lzljs3620320] I agree with you. If a join has two window nodes as its inputs but the 
join condition does not satisfy the window-join restrictions, it could fall back to a 
regular join.
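
A sketch of the proposed behavior (the names below are illustrative, not the real WindowJoinUtil): the check reports whether both window_start and window_end equalities are present instead of throwing, so a non-matching condition simply keeps the regular-join translation.

{code:java}
import java.util.List;

final class WindowJoinCheck {

    /** Stand-in for one "left.col = right.col" predicate of the join condition. */
    static final class Equality {
        final String leftColumn;
        final String rightColumn;

        Equality(String leftColumn, String rightColumn) {
            this.leftColumn = leftColumn;
            this.rightColumn = rightColumn;
        }
    }

    /** True only if the condition equates both window_start and window_end of the inputs. */
    static boolean containsWindowStartAndEndEquality(List<Equality> equalities) {
        boolean hasStart = false;
        boolean hasEnd = false;
        for (Equality e : equalities) {
            hasStart |= "window_start".equals(e.leftColumn) && "window_start".equals(e.rightColumn);
            hasEnd |= "window_end".equals(e.leftColumn) && "window_end".equals(e.rightColumn);
        }
        return hasStart && hasEnd; // false => the planner keeps the regular join, no exception
    }
}
{code}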

> WindowJoinUtil.containsWindowStartEqualityAndEndEquality should not throw 
> exception
> ---
>
> Key: FLINK-22719
> URL: https://issues.apache.org/jira/browse/FLINK-22719
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jingsong Lee
>Priority: Major
> Fix For: 1.14.0
>
>
> This will break regular join SQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22719) WindowJoinUtil.containsWindowStartEqualityAndEndEquality should not throw exception

2021-05-19 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-22719:


 Summary: WindowJoinUtil.containsWindowStartEqualityAndEndEquality 
should not throw exception
 Key: FLINK-22719
 URL: https://issues.apache.org/jira/browse/FLINK-22719
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: Jingsong Lee
 Fix For: 1.14.0


This will break regular join SQL.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15927: [FLINK-22639][runtime] ClassLoaderUtil cannot print classpath of Flin…

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15927:
URL: https://github.com/apache/flink/pull/15927#issuecomment-842054909


   
   ## CI report:
   
   * 760d7a3b2f4d2cc87871f228316f1f2bf70fe8ae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18112)
 
   * ce1bf535ffd28ea2b7fc10264a184f05e8980aed Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18160)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22718) Could not create actor system

2021-05-19 Thread HYUNHOO KWON (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

HYUNHOO KWON updated FLINK-22718:
-
Summary: Could not create actor system  (was: java.lang.Exception: Could 
not create actor system)

> Could not create actor system
> -
>
> Key: FLINK-22718
> URL: https://issues.apache.org/jira/browse/FLINK-22718
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.12.3
>Reporter: HYUNHOO KWON
>Priority: Major
>
> * Java Flink local test failure (Could not create actor system), only on v1.12.3.
>  ** No issue on v1.11.1, v1.12.2, or v1.13.0.
>  * Error Message
>  ** 
> {code:java}
> java.lang.Exception: Could not create actor systemjava.lang.Exception: Could 
> not create actor system at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startLocalActorSystem(BootstrapTools.java:281)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils$AkkaRpcServiceBuilder.createAndStart(AkkaRpcServiceUtils.java:361)
>  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils$AkkaRpcServiceBuilder.createAndStart(AkkaRpcServiceUtils.java:344)
>  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createLocalRpcService(MiniCluster.java:952)
>  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:288) 
> at 
> org.apache.flink.client.program.PerJobMiniClusterFactory.submitJob(PerJobMiniClusterFactory.java:75)
>  at 
> org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:85)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1905)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1796)
>  at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1782)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1765)
>  at 
> com.kakao.bi.ka.conversion.job.KaConversion.streamKaConversion(KaConversion.java:56)
>  at 
> com.kakao.bi.ka.conversion.KaConversionApplication.main(KaConversionApplication.java:9)Caused
>  by: java.lang.NoClassDefFoundError: akka/actor/ExtensionId$class at 
> org.apache.flink.runtime.akka.RemoteAddressExtension$.(RemoteAddressExtension.scala:32)
>  at 
> org.apache.flink.runtime.akka.RemoteAddressExtension$.(RemoteAddressExtension.scala)
>  at org.apache.flink.runtime.akka.AkkaUtils$.getAddress(AkkaUtils.scala:804) 
> at org.apache.flink.runtime.akka.AkkaUtils.getAddress(AkkaUtils.scala) at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:298)
>  at 
> org.apache.flink.runtime.clusterframework.BootstrapTools.startLocalActorSystem(BootstrapTools.java:279)
>  ... 13 moreCaused by: java.lang.ClassNotFoundException: 
> akka.actor.ExtensionId$class at 
> java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
> sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
> java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 19 more
> {code}
>  * dependency
>  ** 
> {code:java}
> ext {
> flinkScalaVersion = '2.12'
> flinkVersion = '1.12.3'
> }{code}
>  ** 
> {code:java}
> dependencies {
> compileOnly 'org.projectlombok:lombok:1.18.8'
> annotationProcessor 'org.projectlombok:lombok:1.18.8'
> compile group: 'com.typesafe', name: 'config', version: '1.3.4'
> compile group: 'org.apache.httpcomponents', name: 'httpclient', version: 
> '4.5.10'
> compile group: 'org.apache.flink', name: 'flink-java', version: 
> "$flinkVersion"
> compile group: 'org.apache.flink', name: 
> "flink-streaming-java_$flinkScalaVersion", version: "$flinkVersion"
> compile group: 'org.apache.flink', name: 
> "flink-clients_$flinkScalaVersion", version: "$flinkVersion"
> compile group: 'org.apache.flink', name: 
> "flink-connector-kafka_$flinkScalaVersion", version: "$flinkVersion"
> compile group: 'org.apache.flink', name: 
> "flink-connector-elasticsearch7_$flinkScalaVersion", version: "$flinkVersion"
> testCompile group: 'junit', name: 'junit', version: '4.12'
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-22718) java.lang.Exception: Could not create actor system

2021-05-19 Thread HYUNHOO KWON (Jira)
HYUNHOO KWON created FLINK-22718:


 Summary: java.lang.Exception: Could not create actor system
 Key: FLINK-22718
 URL: https://issues.apache.org/jira/browse/FLINK-22718
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.12.3
Reporter: HYUNHOO KWON


* Java Flink local test failure (Could not create actor system), only on v1.12.3.
 ** No issue on v1.11.1, v1.12.2, or v1.13.0.
 * Error Message
 ** 
{code:java}
java.lang.Exception: Could not create actor systemjava.lang.Exception: Could 
not create actor system at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startLocalActorSystem(BootstrapTools.java:281)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils$AkkaRpcServiceBuilder.createAndStart(AkkaRpcServiceUtils.java:361)
 at 
org.apache.flink.runtime.rpc.akka.AkkaRpcServiceUtils$AkkaRpcServiceBuilder.createAndStart(AkkaRpcServiceUtils.java:344)
 at 
org.apache.flink.runtime.minicluster.MiniCluster.createLocalRpcService(MiniCluster.java:952)
 at 
org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:288) at 
org.apache.flink.client.program.PerJobMiniClusterFactory.submitJob(PerJobMiniClusterFactory.java:75)
 at 
org.apache.flink.client.deployment.executors.LocalExecutor.execute(LocalExecutor.java:85)
 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.executeAsync(StreamExecutionEnvironment.java:1905)
 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1796)
 at 
org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1782)
 at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1765)
 at 
com.kakao.bi.ka.conversion.job.KaConversion.streamKaConversion(KaConversion.java:56)
 at 
com.kakao.bi.ka.conversion.KaConversionApplication.main(KaConversionApplication.java:9)Caused
 by: java.lang.NoClassDefFoundError: akka/actor/ExtensionId$class at 
org.apache.flink.runtime.akka.RemoteAddressExtension$.(RemoteAddressExtension.scala:32)
 at 
org.apache.flink.runtime.akka.RemoteAddressExtension$.(RemoteAddressExtension.scala)
 at org.apache.flink.runtime.akka.AkkaUtils$.getAddress(AkkaUtils.scala:804) at 
org.apache.flink.runtime.akka.AkkaUtils.getAddress(AkkaUtils.scala) at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startActorSystem(BootstrapTools.java:298)
 at 
org.apache.flink.runtime.clusterframework.BootstrapTools.startLocalActorSystem(BootstrapTools.java:279)
 ... 13 moreCaused by: java.lang.ClassNotFoundException: 
akka.actor.ExtensionId$class at 
java.net.URLClassLoader.findClass(URLClassLoader.java:382) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:424) at 
sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349) at 
java.lang.ClassLoader.loadClass(ClassLoader.java:357) ... 19 more
{code}

 * dependency
 ** 
{code:java}
ext {
flinkScalaVersion = '2.12'
flinkVersion = '1.12.3'
}{code}

 ** 
{code:java}
dependencies {
compileOnly 'org.projectlombok:lombok:1.18.8'
annotationProcessor 'org.projectlombok:lombok:1.18.8'
compile group: 'com.typesafe', name: 'config', version: '1.3.4'
compile group: 'org.apache.httpcomponents', name: 'httpclient', version: 
'4.5.10'
compile group: 'org.apache.flink', name: 'flink-java', version: 
"$flinkVersion"
compile group: 'org.apache.flink', name: 
"flink-streaming-java_$flinkScalaVersion", version: "$flinkVersion"
compile group: 'org.apache.flink', name: 
"flink-clients_$flinkScalaVersion", version: "$flinkVersion"
compile group: 'org.apache.flink', name: 
"flink-connector-kafka_$flinkScalaVersion", version: "$flinkVersion"
compile group: 'org.apache.flink', name: 
"flink-connector-elasticsearch7_$flinkScalaVersion", version: "$flinkVersion"
testCompile group: 'junit', name: 'junit', version: '4.12'
}
{code}
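
 * classpath note (editor's sketch, not from the original report)
 ** The missing class {{akka.actor.ExtensionId$class}} follows the Scala 2.11 trait-implementation naming scheme, so this error pattern usually indicates that Scala 2.11-compiled Flink/Akka classes are mixed with Scala 2.12 dependencies on the classpath. A minimal, hypothetical check that prints which jar each relevant class is loaded from (the class name {{ClasspathCheck}} is illustrative):
{code:java}
import java.net.URL;

public class ClasspathCheck {
    public static void main(String[] args) throws Exception {
        // Mismatched _2.11 / _2.12 jars in this output would explain the
        // NoClassDefFoundError for akka/actor/ExtensionId$class above.
        for (String name : new String[] {
                "akka.actor.ExtensionId",
                "org.apache.flink.runtime.akka.AkkaUtils"}) {
            URL location = Class.forName(name)
                    .getProtectionDomain().getCodeSource().getLocation();
            System.out.println(name + " -> " + location);
        }
    }
}
{code}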



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15927: [FLINK-22639][runtime] ClassLoaderUtil cannot print classpath of Flin…

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15927:
URL: https://github.com/apache/flink/pull/15927#issuecomment-842054909


   
   ## CI report:
   
   * 760d7a3b2f4d2cc87871f228316f1f2bf70fe8ae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18112)
 
   * ce1bf535ffd28ea2b7fc10264a184f05e8980aed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21803) Tumbling / Sliding windows are unaware of daylight savings time

2021-05-19 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-21803:
---
Fix Version/s: (was: 1.13.0)

> Tumbling / Sliding windows are unaware of daylight savings time
> ---
>
> Key: FLINK-21803
> URL: https://issues.apache.org/jira/browse/FLINK-21803
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.2
>Reporter: Craig Smoothey
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> It is currently possible to specify an "offset" for tumbling / sliding 
> windows. The offset is however immutable. This creates a problem for 
> aggregations that have to be performed in a timezone which utilises daylight 
> savings time for half of the year. For example. If one is aggregating data by 
> day in the New York time zone, then for half of the year, the offset is 5 
> hours (relative UTC) and for the other half of the year, the offset is 4 
> hours (relative UTC). There is no way to construct tumbling / sliding windows 
> to specify daylight savings time behaviour. It would be helpful if there was 
> a constructor for tumbling / sliding windows to specify the timezone that the 
> aggregation must be performed in (default = UTC). The tumbling / sliding 
> window would then be required to automatically change the offset depending on 
> whether daylight savings time is active or not for the specified time zone. 
> My application is using the DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21803) Tumbling / Sliding windows are unaware of daylight savings time

2021-05-19 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348019#comment-17348019
 ] 

Leonard Xu edited comment on FLINK-21803 at 5/20/21, 2:34 AM:
--

Since flink 1.13 you can define time attribute on TIMESTAMP_LTZ column and then 
the window calculation will consider the DST time.

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/timezone/#daylight-saving-time-support
 

 

The feature only supported in Table/SQL, I reopen this ticket because the issue 
description is about DataStream case.


was (Author: leonard xu):
Since flink 1.13 you can define time attribute on TIMESTAMP_LTZ column and then 
the window calculation will consider the DST time.

[1][[https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/timezone/#daylight-saving-time-support]
 

> Tumbling / Sliding windows are unaware of daylight savings time
> ---
>
> Key: FLINK-21803
> URL: https://issues.apache.org/jira/browse/FLINK-21803
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.2
>Reporter: Craig Smoothey
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.13.0
>
>
> It is currently possible to specify an "offset" for tumbling / sliding 
> windows. The offset is however immutable. This creates a problem for 
> aggregations that have to be performed in a timezone which utilises daylight 
> savings time for half of the year. For example. If one is aggregating data by 
> day in the New York time zone, then for half of the year, the offset is 5 
> hours (relative UTC) and for the other half of the year, the offset is 4 
> hours (relative UTC). There is no way to construct tumbling / sliding windows 
> to specify daylight savings time behaviour. It would be helpful if there was 
> a constructor for tumbling / sliding windows to specify the timezone that the 
> aggregation must be performed in (default = UTC). The tumbling / sliding 
> window would then be required to automatically change the offset depending on 
> whether daylight savings time is active or not for the specified time zone. 
> My application is using the DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Reopened] (FLINK-21803) Tumbling / Sliding windows are unaware of daylight savings time

2021-05-19 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu reopened FLINK-21803:


> Tumbling / Sliding windows are unaware of daylight savings time
> ---
>
> Key: FLINK-21803
> URL: https://issues.apache.org/jira/browse/FLINK-21803
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.2
>Reporter: Craig Smoothey
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.13.0
>
>
> It is currently possible to specify an "offset" for tumbling / sliding 
> windows. The offset is however immutable. This creates a problem for 
> aggregations that have to be performed in a timezone which utilises daylight 
> savings time for half of the year. For example. If one is aggregating data by 
> day in the New York time zone, then for half of the year, the offset is 5 
> hours (relative UTC) and for the other half of the year, the offset is 4 
> hours (relative UTC). There is no way to construct tumbling / sliding windows 
> to specify daylight savings time behaviour. It would be helpful if there was 
> a constructor for tumbling / sliding windows to specify the timezone that the 
> aggregation must be performed in (default = UTC). The tumbling / sliding 
> window would then be required to automatically change the offset depending on 
> whether daylight savings time is active or not for the specified time zone. 
> My application is using the DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-21803) Tumbling / Sliding windows are unaware of daylight savings time

2021-05-19 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348019#comment-17348019
 ] 

Leonard Xu edited comment on FLINK-21803 at 5/20/21, 2:32 AM:
--

Since flink 1.13 you can define time attribute on TIMESTAMP_LTZ column and then 
the window calculation will consider the DST time.

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/timezone/#daylight-saving-time-support
 


was (Author: leonard xu):
Since flink 1.13 you can define time attribute on TIMESTAMP_LTZ column and then 
the window calculation will consider the DST time.

[1][https://ci.apache.org/projects/flink/flink-docs-release-]1.13/docs/dev/table/concepts/timezone/#daylight-saving-time-support

> Tumbling / Sliding windows are unaware of daylight savings time
> ---
>
> Key: FLINK-21803
> URL: https://issues.apache.org/jira/browse/FLINK-21803
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.2
>Reporter: Craig Smoothey
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.13.0
>
>
> It is currently possible to specify an "offset" for tumbling / sliding 
> windows. The offset is however immutable. This creates a problem for 
> aggregations that have to be performed in a timezone which utilises daylight 
> savings time for half of the year. For example. If one is aggregating data by 
> day in the New York time zone, then for half of the year, the offset is 5 
> hours (relative UTC) and for the other half of the year, the offset is 4 
> hours (relative UTC). There is no way to construct tumbling / sliding windows 
> to specify daylight savings time behaviour. It would be helpful if there was 
> a constructor for tumbling / sliding windows to specify the timezone that the 
> aggregation must be performed in (default = UTC). The tumbling / sliding 
> window would then be required to automatically change the offset depending on 
> whether daylight savings time is active or not for the specified time zone. 
> My application is using the DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-21803) Tumbling / Sliding windows are unaware of daylight savings time

2021-05-19 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21803?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu resolved FLINK-21803.

Fix Version/s: 1.13.0
   Resolution: Implemented

Since flink 1.13 you can define time attribute on TIMESTAMP_LTZ column and then 
the window calculation will consider the DST time.

[1] https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/table/concepts/timezone/#daylight-saving-time-support
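
A minimal sketch of that approach (assumptions: Flink 1.13 Table API, a datagen 
table and field names that are purely illustrative). Windows defined on the 
TIMESTAMP_LTZ attribute follow the configured local time zone, so a daily window 
shrinks or grows around the DST switch instead of using a fixed UTC offset:
{code:java}
import java.time.ZoneId;

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DstWindowSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());
        // Window boundaries are evaluated in this zone, including the DST shift.
        tEnv.getConfig().setLocalTimeZone(ZoneId.of("America/New_York"));

        tEnv.executeSql(
                "CREATE TABLE events ("
                        + "  user_id STRING,"
                        + "  epoch_millis BIGINT,"
                        + "  ts_ltz AS TO_TIMESTAMP_LTZ(epoch_millis, 3),"
                        + "  WATERMARK FOR ts_ltz AS ts_ltz - INTERVAL '5' SECOND"
                        + ") WITH ('connector' = 'datagen')");

        tEnv.executeSql(
                "SELECT window_start, window_end, COUNT(*) AS cnt"
                        + " FROM TABLE(TUMBLE(TABLE events, DESCRIPTOR(ts_ltz), INTERVAL '1' DAY))"
                        + " GROUP BY window_start, window_end")
                .print();
    }
}
{code}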

> Tumbling / Sliding windows are unaware of daylight savings time
> ---
>
> Key: FLINK-21803
> URL: https://issues.apache.org/jira/browse/FLINK-21803
> Project: Flink
>  Issue Type: Improvement
>  Components: API / DataStream
>Affects Versions: 1.12.2
>Reporter: Craig Smoothey
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.13.0
>
>
> It is currently possible to specify an "offset" for tumbling / sliding 
> windows. The offset is however immutable. This creates a problem for 
> aggregations that have to be performed in a timezone which utilises daylight 
> savings time for half of the year. For example. If one is aggregating data by 
> day in the New York time zone, then for half of the year, the offset is 5 
> hours (relative UTC) and for the other half of the year, the offset is 4 
> hours (relative UTC). There is no way to construct tumbling / sliding windows 
> to specify daylight savings time behaviour. It would be helpful if there was 
> a constructor for tumbling / sliding windows to specify the timezone that the 
> aggregation must be performed in (default = UTC). The tumbling / sliding 
> window would then be required to automatically change the offset depending on 
> whether daylight savings time is active or not for the specified time zone. 
> My application is using the DataStream API.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] paul8263 commented on pull request #15927: [FLINK-22639][runtime] ClassLoaderUtil cannot print classpath of Flin…

2021-05-19 Thread GitBox


paul8263 commented on pull request #15927:
URL: https://github.com/apache/flink/pull/15927#issuecomment-844635880


   Thank you @zentol. I reworked that code in ce1bf53.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-20495) Elasticsearch6DynamicSinkITCase Hang

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20495?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348016#comment-17348016
 ] 

Guowei Ma commented on FLINK-20495:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18152&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=bfbc6239-57a0-5db0-63f3-41551b4f7d51&l=12229

> Elasticsearch6DynamicSinkITCase Hang
> 
>
> Key: FLINK-20495
> URL: https://issues.apache.org/jira/browse/FLINK-20495
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Connectors / 
> ElasticSearch, Tests
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10535&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=03dca39c-73e8-5aaf-601d-328ae5c35f20]
>  
> {code:java}
> 2020-12-04T22:39:33.9748225Z [INFO] Running 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
> 2020-12-04T22:54:51.9486410Z 
> ==
> 2020-12-04T22:54:51.9488766Z Process produced no output for 900 seconds.
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22613) FlinkKinesisITCase.testStopWithSavepoint fails

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348015#comment-17348015
 ] 

Guowei Ma commented on FLINK-22613:
---

1.12
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18152&view=logs&j=e1276d0f-df12-55ec-86b5-c0ad597d83c9&t=906e9244-f3be-5604-1979-e767c8a6f6d9&l=13977

> FlinkKinesisITCase.testStopWithSavepoint fails
> --
>
> Key: FLINK-22613
> URL: https://issues.apache.org/jira/browse/FLINK-22613
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kinesis
>Affects Versions: 1.13.0, 1.14.0, 1.12.3
>Reporter: Guowei Ma
>Priority: Blocker
>  Labels: test-stability
>
> {code:java}
> 2021-05-10T03:09:18.4601182Z May 10 03:09:18 [ERROR] 
> testStopWithSavepoint(org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase)
>   Time elapsed: 3.526 s  <<< FAILURE!
> 2021-05-10T03:09:18.4601884Z May 10 03:09:18 java.lang.AssertionError: 
> 2021-05-10T03:09:18.4605902Z May 10 03:09:18 
> 2021-05-10T03:09:18.4616154Z May 10 03:09:18 Expected: a collection with size 
> a value less than <10>
> 2021-05-10T03:09:18.4616818Z May 10 03:09:18  but: collection size <10> 
> was equal to <10>
> 2021-05-10T03:09:18.4618087Z May 10 03:09:18  at 
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:20)
> 2021-05-10T03:09:18.4618702Z May 10 03:09:18  at 
> org.junit.Assert.assertThat(Assert.java:956)
> 2021-05-10T03:09:18.4619467Z May 10 03:09:18  at 
> org.junit.Assert.assertThat(Assert.java:923)
> 2021-05-10T03:09:18.4620391Z May 10 03:09:18  at 
> org.apache.flink.streaming.connectors.kinesis.FlinkKinesisITCase.testStopWithSavepoint(FlinkKinesisITCase.java:126)
> 2021-05-10T03:09:18.4621115Z May 10 03:09:18  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2021-05-10T03:09:18.4621751Z May 10 03:09:18  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2021-05-10T03:09:18.4622475Z May 10 03:09:18  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2021-05-10T03:09:18.4623142Z May 10 03:09:18  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2021-05-10T03:09:18.4623783Z May 10 03:09:18  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2021-05-10T03:09:18.4624514Z May 10 03:09:18  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2021-05-10T03:09:18.4625246Z May 10 03:09:18  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2021-05-10T03:09:18.4625967Z May 10 03:09:18  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2021-05-10T03:09:18.4626671Z May 10 03:09:18  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2021-05-10T03:09:18.4627349Z May 10 03:09:18  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-10T03:09:18.4627979Z May 10 03:09:18  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-10T03:09:18.4628582Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2021-05-10T03:09:18.4629251Z May 10 03:09:18  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2021-05-10T03:09:18.4629950Z May 10 03:09:18  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2021-05-10T03:09:18.4630616Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2021-05-10T03:09:18.4631339Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2021-05-10T03:09:18.4631986Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2021-05-10T03:09:18.4632630Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2021-05-10T03:09:18.4633269Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2021-05-10T03:09:18.4634016Z May 10 03:09:18  at 
> org.testcontainers.containers.FailureDetectingExternalResource$1.evaluate(FailureDetectingExternalResource.java:30)
> 2021-05-10T03:09:18.4634786Z May 10 03:09:18  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2021-05-10T03:09:18.4635412Z May 10 03:09:18  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2021-05-10T03:09:18.4635995Z May 10 03:09:18  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2021-05-10T03:09:18.4636656Z May 10 03:09:18  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2021-05-10T03:09:18.4637398Z May 10 03:09:18  at 
> 

[jira] [Updated] (FLINK-20498) SQLClientSchemaRegistryITCase.testReading test timed out after 120000 milliseconds

2021-05-19 Thread Guowei Ma (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guowei Ma updated FLINK-20498:
--
Affects Version/s: 1.12.3

> SQLClientSchemaRegistryITCase.testReading test timed out after 12 
> milliseconds
> --
>
> Key: FLINK-20498
> URL: https://issues.apache.org/jira/browse/FLINK-20498
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.1, 1.13.0, 1.12.3
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10548&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b]
> {code:java}
> 2020-12-06T02:06:38.6416440Z Dec 06 02:06:38 
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
> 2020-12-06T02:06:38.6417052Z Dec 06 02:06:38  at java.lang.Object.wait(Native 
> Method)
> 2020-12-06T02:06:38.6417586Z Dec 06 02:06:38  at 
> java.lang.Thread.join(Thread.java:1252)
> 2020-12-06T02:06:38.6418170Z Dec 06 02:06:38  at 
> java.lang.Thread.join(Thread.java:1326)
> 2020-12-06T02:06:38.6418788Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.KafkaAdminClient.close(KafkaAdminClient.java:541)
> 2020-12-06T02:06:38.6419463Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:96)
> 2020-12-06T02:06:38.6420277Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:79)
> 2020-12-06T02:06:38.6420973Z Dec 06 02:06:38  at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.createTopic(KafkaContainerClient.java:76)
> 2020-12-06T02:06:38.6421797Z Dec 06 02:06:38  at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:109)
> 2020-12-06T02:06:38.6422517Z Dec 06 02:06:38  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-06T02:06:38.6423173Z Dec 06 02:06:38  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-06T02:06:38.6423990Z Dec 06 02:06:38  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-06T02:06:38.6424656Z Dec 06 02:06:38  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-06T02:06:38.6425321Z Dec 06 02:06:38  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-06T02:06:38.6426057Z Dec 06 02:06:38  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-06T02:06:38.6426766Z Dec 06 02:06:38  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-06T02:06:38.6427478Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-06T02:06:38.6428232Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-12-06T02:06:38.6428999Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-12-06T02:06:38.6429707Z Dec 06 02:06:38  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-12-06T02:06:38.6430292Z Dec 06 02:06:38  at 
> java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20498) SQLClientSchemaRegistryITCase.testReading test timed out after 120000 milliseconds

2021-05-19 Thread Guowei Ma (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348013#comment-17348013
 ] 

Guowei Ma commented on FLINK-20498:
---

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=18152&view=logs&j=6caf31d6-847a-526e-9624-468e053467d6&t=0b23652f-b18b-5b6e-6eb6-a11070364610&l=17745


> SQLClientSchemaRegistryITCase.testReading test timed out after 12 
> milliseconds
> --
>
> Key: FLINK-20498
> URL: https://issues.apache.org/jira/browse/FLINK-20498
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.1, 1.13.0
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10548&view=logs&j=739e6eac-8312-5d31-d437-294c4d26fced&t=a68b8d89-50e9-5977-4500-f4fde4f57f9b]
> {code:java}
> 2020-12-06T02:06:38.6416440Z Dec 06 02:06:38 
> org.junit.runners.model.TestTimedOutException: test timed out after 12 
> milliseconds
> 2020-12-06T02:06:38.6417052Z Dec 06 02:06:38  at java.lang.Object.wait(Native 
> Method)
> 2020-12-06T02:06:38.6417586Z Dec 06 02:06:38  at 
> java.lang.Thread.join(Thread.java:1252)
> 2020-12-06T02:06:38.6418170Z Dec 06 02:06:38  at 
> java.lang.Thread.join(Thread.java:1326)
> 2020-12-06T02:06:38.6418788Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.KafkaAdminClient.close(KafkaAdminClient.java:541)
> 2020-12-06T02:06:38.6419463Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:96)
> 2020-12-06T02:06:38.6420277Z Dec 06 02:06:38  at 
> org.apache.kafka.clients.admin.Admin.close(Admin.java:79)
> 2020-12-06T02:06:38.6420973Z Dec 06 02:06:38  at 
> org.apache.flink.tests.util.kafka.KafkaContainerClient.createTopic(KafkaContainerClient.java:76)
> 2020-12-06T02:06:38.6421797Z Dec 06 02:06:38  at 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase.testReading(SQLClientSchemaRegistryITCase.java:109)
> 2020-12-06T02:06:38.6422517Z Dec 06 02:06:38  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-12-06T02:06:38.6423173Z Dec 06 02:06:38  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-12-06T02:06:38.6423990Z Dec 06 02:06:38  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-12-06T02:06:38.6424656Z Dec 06 02:06:38  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-12-06T02:06:38.6425321Z Dec 06 02:06:38  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-12-06T02:06:38.6426057Z Dec 06 02:06:38  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-12-06T02:06:38.6426766Z Dec 06 02:06:38  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-12-06T02:06:38.6427478Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-12-06T02:06:38.6428232Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-12-06T02:06:38.6428999Z Dec 06 02:06:38  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-12-06T02:06:38.6429707Z Dec 06 02:06:38  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-12-06T02:06:38.6430292Z Dec 06 02:06:38  at 
> java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22698) RabbitMQ source does not stop unless message arrives in queue

2021-05-19 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17348011#comment-17348011
 ] 

Nicholas Jiang commented on FLINK-22698:


[~austince], thanks for tracking this issue. From this ML thread, the 
discussion seems to have stalled. IMO, we could possibly fix the isEndOfStream 
method to solve the above problem.
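
For reference, a minimal sketch of the sentinel-based workaround described in 
the issue below (the marker string and class name are illustrative, not from 
the report): the source finishes once isEndOfStream returns true for a consumed 
message.
{code:java}
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.api.common.typeinfo.Types;

public class SentinelStringSchema implements DeserializationSchema<String> {

    // Illustrative marker; the producer of the sentinel and this job must agree on it.
    private static final String SENTINEL = "__END_OF_STREAM__";

    @Override
    public String deserialize(byte[] message) throws IOException {
        return new String(message, StandardCharsets.UTF_8);
    }

    @Override
    public boolean isEndOfStream(String nextElement) {
        // Returning true here lets the RMQ source stop after the sentinel is read.
        return SENTINEL.equals(nextElement);
    }

    @Override
    public TypeInformation<String> getProducedType() {
        return Types.STRING;
    }
}
{code}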

> RabbitMQ source does not stop unless message arrives in queue
> -
>
> Key: FLINK-22698
> URL: https://issues.apache.org/jira/browse/FLINK-22698
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors/ RabbitMQ
>Affects Versions: 1.12.0
>Reporter: Austin Cawley-Edwards
>Priority: Major
> Attachments: taskmanager_thread_dump.json
>
>
> In a streaming job with multiple RMQSources, a stop-with-savepoint request 
> has unexpected behavior. Regular checkpoints and savepoints complete 
> successfully, it is only the stop-with-savepoint request where this behavior 
> is seen.
>  
> *Expected Behavior:*
> The stop-with-savepoint request stops the job with a FINISHED state.
>  
> *Actual Behavior:*
> The stop-with-savepoint request either times out or hangs indefinitely unless 
> a message arrives in all the queues that the job consumes from after the 
> stop-with-savepoint request is made.
>  
> *Current workaround:*
> Send a sentinel value to each of the queues consumed by the job that the 
> deserialization schema checks in its isEndOfStream method. This is cumbersome 
> and makes it difficult to do stateful upgrades, as coordination with another 
> system is now necessary. 
>  
>  
> The TaskManager thread dump is attached.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844486999


   
   ## CI report:
   
   * a72b9147bc1f30b72a54ae0beff7515019d39326 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18154)
 
   * efc26ca0fa1d3d92438e1ffb00c464d20543781a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18156)
 
   * 6546dc571af0d7f4c04e9e063a6f4b89892e031c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18159)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15924:
URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851


   
   ## CI report:
   
   * 14ebe6d069fff468bda4f4bad5ecd3bdeb43cdb0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18115)
 
   * 4529d29fc411304c78076e303eff3ebf81aa16ae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18158)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15812: remove slotpoolImpl

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15812:
URL: https://github.com/apache/flink/pull/15812#issuecomment-829441551


   
   ## CI report:
   
   * fa9018ec27b571fa64b2a5301926e2e663db0a75 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18157)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15962: [WIP] [FLINK-22326] Fix Iterator Operation leading to checkpoint failure

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15962:
URL: https://github.com/apache/flink/pull/15962#issuecomment-844547552


   
   ## CI report:
   
   * 41505bab4cbfae474b07519657c8d0383f4bdd0a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18155)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15924: [FLINK-22670][FLIP-150][connector/common] Hybrid source baseline

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15924:
URL: https://github.com/apache/flink/pull/15924#issuecomment-841943851


   
   ## CI report:
   
   * 14ebe6d069fff468bda4f4bad5ecd3bdeb43cdb0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18115)
 
   * 4529d29fc411304c78076e303eff3ebf81aa16ae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15812: remove slotpoolImpl

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15812:
URL: https://github.com/apache/flink/pull/15812#issuecomment-829441551


   
   ## CI report:
   
   * 42cb97338993cfc7ca4ac179a66517719d6981c8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18150)
 
   * fa9018ec27b571fa64b2a5301926e2e663db0a75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 commented on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 commented on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844606216


   cc @zentol @sjwiesman  Hello all, I think this PR should be ready. Please 
review.
   In addition, I have a side question: there are many third-party dependencies 
throughout the Flink source code. For example, the Kubernetes module uses the 
fabric8io library. 
(https://github.com/apache/flink/blob/master/flink-kubernetes/pom.xml#L73)
   
   The NOTICE file doesn't contain this information. Could you explain a bit?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844486999


   
   ## CI report:
   
   * a72b9147bc1f30b72a54ae0beff7515019d39326 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18154)
 
   * efc26ca0fa1d3d92438e1ffb00c464d20543781a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18156)
 
   * 6546dc571af0d7f4c04e9e063a6f4b89892e031c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844486999


   
   ## CI report:
   
   * a72b9147bc1f30b72a54ae0beff7515019d39326 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18154)
 
   * efc26ca0fa1d3d92438e1ffb00c464d20543781a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


sjwiesman commented on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844578456


   Yes


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 commented on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 commented on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844571777


   
   
   
   > @zentol because the syntax highlighting comes from a file generated by 
Hugo but which is actually chroma under the hood
   > 
   > The file is only called GitHub css because that’s the syntax theme
   > 
   > https://github.com/apache/flink/blob/master/docs/assets/github.css
   
   @sjwiesman Do you think we need to add chroma to the NOTICE file under the MIT 
license part? Something like this? 
   
   ```
   - chroma (css generated by Hugo) (https://github.com/alecthomas/chroma) 
Copyright (C) 2017 Alec Thomas
   -> in "docs/assets/github.css"
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] sjwiesman commented on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


sjwiesman commented on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844566891


   @zentol because the syntax highlighting comes from a file generated by Hugo, 
which is actually chroma under the hood
   
   The file is only called GitHub css because that’s the syntax theme
   
   https://github.com/apache/flink/blob/master/docs/assets/github.css
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844565249


   > @sjwiesman The licenses directory contains a file for chroma, but this 
isn't referenced in the NOTICE. What's up with that?
   
   This seems to be related to Hugo.
   I found an explanatory README here 
(https://github.com/gohugoio/hugo/blob/master/docs/content/en/commands/hugo_gen_chromastyles.md)
   
   Maybe this is also considered a transitive dependency? 
   However, this does exist in the Flink code as a css file. 
(https://github.com/apache/flink/blob/master/docs/assets/github.css)
   
   If this really needs to be added to the NOTICE file, please suggest the 
statement to use.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 edited a comment on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 edited a comment on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844565249


   > @sjwiesman The licenses directory contains a file for chroma, but this 
isn't referenced in the NOTICE. What's up with that?
   
   This seems to be related to Hugo.
   I found an explanatory README here 
(https://github.com/gohugoio/hugo/blob/master/docs/content/en/commands/hugo_gen_chromastyles.md)
   
   Maybe this is also considered a transitive dependency? 
   This does exist in the Flink code as a css file. 
(https://github.com/apache/flink/blob/master/docs/assets/github.css)
   
   If this really needs to be added to the NOTICE file, please suggest the 
statement to use.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 commented on pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 commented on pull request #15961:
URL: https://github.com/apache/flink/pull/15961#issuecomment-844565249


   > @sjwiesman The licenses directory contains a file for chroma, but this 
isn't referenced in the NOTICE. What's up with that?
   
   This seems to be related to Hugo.
   I found an explanatory README here 
(https://github.com/gohugoio/hugo/blob/master/docs/content/en/commands/hugo_gen_chromastyles.md)
   
   Maybe this is also considered a transitive dependency? 
   This does exist in the Flink code as a css file. 
(https://github.com/apache/flink/blob/master/docs/assets/github.css)
   
   If this needs to be added to the NOTICE file, please suggest the 
statement to use.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-22326) Job contains Iterate Operator always fails on Checkpoint

2021-05-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22326:
---
Labels: pull-request-available  (was: )

> Job contains Iterate Operator always fails on Checkpoint 
> -
>
> Key: FLINK-22326
> URL: https://issues.apache.org/jira/browse/FLINK-22326
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.1
>Reporter: Lu Niu
>Priority: Major
>  Labels: pull-request-available
> Attachments: Screen Shot 2021-04-16 at 12.40.34 PM.png, Screen Shot 
> 2021-04-16 at 12.43.38 PM.png
>
>
> Job contains Iterate Operator will always fail on checkpoint.
> How to reproduce: 
> [https://gist.github.com/qqibrow/f297babadb0bb662ee398b9088870785]
> this is based on 
> [https://github.com/apache/flink/blob/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/iteration/IterateExample.java,]
>  but a few line difference:
>  1. Make maxWaitTime large enough when creating the IterativeStream
> 2. No output is fed back to the Iterative Source
> Result:
> The same code is able to checkpoint in 1.9.1
> !Screen Shot 2021-04-16 at 12.43.38 PM.png!
>  
> but always fail on checkpoint in 1.11
> !Screen Shot 2021-04-16 at 12.40.34 PM.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15962: [WIP] [FLINK-22326] Fix Iterator Operation leading to checkpoint failure

2021-05-19 Thread GitBox


flinkbot edited a comment on pull request #15962:
URL: https://github.com/apache/flink/pull/15962#issuecomment-844547552


   
   ## CI report:
   
   * 41505bab4cbfae474b07519657c8d0383f4bdd0a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=18155)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] FuyaoLi2017 commented on a change in pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 commented on a change in pull request #15961:
URL: https://github.com/apache/flink/pull/15961#discussion_r635636746



##
File path: NOTICE
##
@@ -28,8 +22,9 @@ See bundled license files for details.
 This project bundles the following dependencies under SIL OFL 1.1 license 
(https://opensource.org/licenses/OFL-1.1).
 See bundled license files for details.
 
-- font-awesome:4.5.0 (Font) (http://fortawesome.github.io/Font-Awesome/) - 
Created by Dave Gandy
--> fonts in "docs/page/font-awesome/fonts"
+- font-awesome:4.6.3 (Font) (https://fontawesome.com/) - Created by Dave Gandy
+-> css in "docs/static/font-awesome/css"

Review comment:
   Thanks for pointing that out. I will update the css file in the MIT part. I 
think the version of the css is 4.6.3. Is there a version for the fonts? I suppose 
they were updated in the same PR and should all be version 4.6.3.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-21928) DuplicateJobSubmissionException after JobManager failover

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21928:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned and neither itself nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> DuplicateJobSubmissionException after JobManager failover
> -
>
> Key: FLINK-21928
> URL: https://issues.apache.org/jira/browse/FLINK-21928
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.10.3, 1.11.3, 1.12.2, 1.13.0
> Environment: StandaloneApplicationClusterEntryPoint using a fixed job 
> ID, High Availability enabled
>Reporter: Ufuk Celebi
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.14.0
>
>
> Consider the following scenario:
>  * Environment: StandaloneApplicationClusterEntryPoint using a fixed job ID, 
> high availability enabled
>  * Flink job reaches a globally terminal state
>  * Flink job is marked as finished in the high-availability service's 
> RunningJobsRegistry
>  * The JobManager fails over
> On recovery, the [Dispatcher throws DuplicateJobSubmissionException, because 
> the job is marked as done in the 
> RunningJobsRegistry|https://github.com/apache/flink/blob/release-1.12.2/flink-runtime/src/main/java/org/apache/flink/runtime/dispatcher/Dispatcher.java#L332-L340].
> When this happens, users cannot get out of the situation without manually 
> redeploying the JobManager process and changing the job ID^1^.
> The desired semantics are that we don't want to re-execute a job that has 
> reached a globally terminal state. In this particular case, we know that the 
> job has already reached such a state (as it has been marked in the registry). 
> Therefore, we could handle this case by executing the regular termination 
> sequence instead of throwing a DuplicateJobSubmission.
> ---
> ^1^ With ZooKeeper HA, the respective node is not ephemeral. In Kubernetes 
> HA, there is no  notion of ephemeral data that is tied to a session in the 
> first place afaik.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] FuyaoLi2017 commented on a change in pull request #15961: [FLINK-22706][release]Update License information in NOTICE file

2021-05-19 Thread GitBox


FuyaoLi2017 commented on a change in pull request #15961:
URL: https://github.com/apache/flink/pull/15961#discussion_r635636746



##
File path: NOTICE
##
@@ -28,8 +22,9 @@ See bundled license files for details.
 This project bundles the following dependencies under SIL OFL 1.1 license 
(https://opensource.org/licenses/OFL-1.1).
 See bundled license files for details.
 
-- font-awesome:4.5.0 (Font) (http://fortawesome.github.io/Font-Awesome/) - 
Created by Dave Gandy
--> fonts in "docs/page/font-awesome/fonts"
+- font-awesome:4.6.3 (Font) (https://fontawesome.com/) - Created by Dave Gandy
+-> css in "docs/static/font-awesome/css"

Review comment:
   Thanks for pointing that out. I will update the css file in the MIT part. I 
think the version of the css is 4.6.3. Is there a version for the fonts?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19144) Error when writing to partitioned table with s3 FileSystem

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19144:
---
Labels: auto-unassigned stale-critical  (was: auto-unassigned)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned and neither itself nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> Error when writing to partitioned table with s3 FileSystem
> --
>
> Key: FLINK-19144
> URL: https://issues.apache.org/jira/browse/FLINK-19144
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Affects Versions: 1.11.1
>Reporter: Pei He
>Priority: Critical
>  Labels: auto-unassigned, stale-critical
> Fix For: 1.11.4
>
>
> It looks like HadoopFileSystemFactory is created in 
> [https://github.com/apache/flink/blob/master/flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/connectors/hive/HiveTableSink.java#L134]
> However, it cannot recognize s3 file system implementations which are based 
> on org.apache.flink.core.fs.FileSystemFactory.
>  
> {code:java}
> Caused by: java.io.IOException: No FileSystem for scheme: s3Caused by: 
> java.io.IOException: No FileSystem for scheme: s3 at 
> org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2810) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2849) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2831) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.hadoop.fs.Path.getFileSystem(Path.java:356) 
> ~[hadoop-common-2.8.3.jar:?] at 
> org.apache.flink.connectors.hive.HadoopFileSystemFactory.create(HadoopFileSystemFactory.java:46)
>  ~[flink-connector-hive_2.11-1.11.1.jar:1.11.1] at 
> org.apache.flink.table.filesystem.stream.StreamingFileCommitter.lambda$initializeState$0(Hive.java:125)
>  ~[flink-table-blink_2.11-1.11.1.jar:1.11.1]
> {code}
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21519) SQLClientHBaseITCase hangs on azure

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21519:
---
Labels: auto-unassigned stale-critical test-stability  (was: 
auto-unassigned test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned and neither itself nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label or in 7 days the issue will be 
deprioritized.


> SQLClientHBaseITCase hangs on azure
> ---
>
> Key: FLINK-21519
> URL: https://issues.apache.org/jira/browse/FLINK-21519
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Table SQL / Client
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: auto-unassigned, stale-critical, test-stability
>
> https://dev.azure.com/wysakowiczdawid/Flink/_build/results?buildId=707&view=logs&j=ae4f8708-9994-57d3-c2d7-b892156e7812&t=9401bf33-03c4-5a24-83fe-e51d75db73ef
> {code}
> Feb 26 13:58:15 [INFO] --- maven-surefire-plugin:2.22.1:test 
> (end-to-end-tests) @ flink-end-to-end-tests-hbase ---
> Feb 26 13:58:15 [INFO] 
> Feb 26 13:58:15 [INFO] ---
> Feb 26 13:58:15 [INFO]  T E S T S
> Feb 26 13:58:15 [INFO] ---
> Feb 26 13:58:16 [INFO] Running 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> ==
> === WARNING: This E2E Run will time out in the next few minutes. Starting to 
> upload the log output ===
> ==
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19248) The main method caused an error: No result found for job, was execute() called before getting the result

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-19248:
---
Labels: auto-unassigned pull-request-available stale-critical  (was: 
auto-unassigned pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> The main method caused an error: No result found for job, was execute() 
> called before getting the result
> 
>
> Key: FLINK-19248
> URL: https://issues.apache.org/jira/browse/FLINK-19248
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataSet, API / DataStream
>Affects Versions: 1.11.1
>Reporter: Shiyu Jin
>Priority: Critical
>  Labels: auto-unassigned, pull-request-available, stale-critical
>
> *[_Gelly_]* *The main method caused an error: No result found for job, was 
> execute() called before getting the result?*
> I downloaded 
> [flink-1.11.1-bin-scala_2.12.tgz|http://apache.mirrors.pair.com/flink/flink-1.11.1/flink-1.11.1-bin-scala_2.12.tgz]
>  from the official Flink site, then followed
>  [Running Gelly 
> Examples|https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/libs/gelly/#running-gelly-examples]
>  to try the PageRank algorithm and hit the problem above. The details 
> are shown below (you can reproduce the error if you follow the steps).
> {code:bash}
> [corona@cas dist ]$ tar -xf flink-1.11.1-bin-scala_2.12.tgz
> [corona@cas dist ]$ cd flink-1.11.1
> [corona@cas flink-1.11.1]$ cp -v opt/flink-gelly*.jar lib  # it copies two 
> gelly jars
> 'opt/flink-gelly_2.12-1.11.1.jar' -> 'lib/flink-gelly_2.12-1.11.1.jar'
> 'opt/flink-gelly-scala_2.12-1.11.1.jar' -> 
> 'lib/flink-gelly-scala_2.12-1.11.1.jar'
>  [corona@cas flink-1.11.1]$ ./bin/start-cluster.sh
>  Starting cluster.
>  Starting standalonesession daemon on host cas.
>  Starting taskexecutor daemon on host cas.
>  [corona@cas flink-1.11.1]$ ./bin/flink run 
> examples/gelly/flink-gelly-examples_2.12-1.11.1.jar --algorithm PageRank 
> --input StarGraph --vertex_count 5 --output Print
>  Job has been submitted with JobID f867abf1d2cd94d07a419591e41b63a5
>  Program execution finished
>  Job with JobID f867abf1d2cd94d07a419591e41b63a5 has finished.
>  Job Runtime: 1647 ms
>  Accumulator Results:
>  - 6907b5f63ee1f31af9715772ddcff154-collect (java.util.ArrayList) [5 elements]
>  # ERROR messages show up
>  
>   org.apache.flink.client.program.ProgramInvocationException: The main method 
> caused an error: No result found for job, was execute() called before getting 
> the result?
>  at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:302)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>  at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:149)
>  at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:699)
>  at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:232)
>  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:916)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:992)
>  at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:992)
>Caused by: java.lang.NullPointerException: No result found for job, was 
> execute() called before getting the result?
>  at 
> org.apache.flink.util.Preconditions.checkNotNull(Preconditions.java:75)
>  at 
> org.apache.flink.graph.AnalyticHelper.getAccumulator(AnalyticHelper.java:81)
>  at org.apache.flink.graph.asm.dataset.Collect.getResult(Collect.java:62)
>  at org.apache.flink.graph.asm.dataset.Collect.getResult(Collect.java:35)
>  at 
> org.apache.flink.graph.asm.dataset.DataSetAnalyticBase.execute(DataSetAnalyticBase.java:56)
>  at org.apache.flink.graph.drivers.output.Print.write(Print.java:48)
>  at org.apache.flink.graph.Runner.execute(Runner.java:454)
>  at org.apache.flink.graph.Runner.main(Runner.java:507)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>  at 
> 

[jira] [Updated] (FLINK-18356) Exit code 137 returned from process

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18356:
---
  Labels: auto-deprioritized-critical pull-request-available test-stability 
 (was: pull-request-available stale-critical test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Exit code 137 returned from process
> ---
>
> Key: FLINK-18356
> URL: https://issues.apache.org/jira/browse/FLINK-18356
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Tests
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: auto-deprioritized-critical, pull-request-available, 
> test-stability
> Fix For: 1.14.0
>
>
> {noformat}
> = test session starts 
> ==
> platform linux -- Python 3.7.3, pytest-5.4.3, py-1.8.2, pluggy-0.13.1
> cachedir: .tox/py37-cython/.pytest_cache
> rootdir: /__w/3/s/flink-python
> collected 568 items
> pyflink/common/tests/test_configuration.py ..[  
> 1%]
> pyflink/common/tests/test_execution_config.py ...[  
> 5%]
> pyflink/dataset/tests/test_execution_environment.py .
> ##[error]Exit code 137 returned from process: file name '/bin/docker', 
> arguments 'exec -i -u 1002 
> 97fc4e22522d2ced1f4d23096b8929045d083dd0a99a4233a8b20d0489e9bddb 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> Finishing: Test - python
> {noformat}
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3729=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20374) Wrong result when shuffling changelog stream on non-primary-key columns

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20374:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Wrong result when shuffling changelog stream on non-primary-key columns
> ---
>
> Key: FLINK-20374
> URL: https://issues.apache.org/jira/browse/FLINK-20374
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.14.0
>
>
> This is reported from user-zh ML: 
> http://apache-flink.147419.n8.nabble.com/flink-1-11-2-cdc-cdc-sql-sink-save-point-job-sink-td8593.html
> {code:sql}
> CREATE TABLE test (
> `id` INT,
> `name` VARCHAR(255),
> `time` TIMESTAMP(3),
> `status` INT,
> PRIMARY KEY(id) NOT ENFORCED
> ) WITH (
>   'connector' = 'mysql-cdc',
>   'hostname' = 'localhost',
>   'port' = '3306',
>   'username' = 'root',
>   'password' = '1',
>   'database-name' = 'ai_audio_lyric_task',
>   'table-name' = 'test'
> )
> CREATE TABLE status (
> `id` INT,
> `name` VARCHAR(255),
> PRIMARY KEY(id) NOT ENFORCED
> ) WITH (  
>   'connector' = 'mysql-cdc',
>   'hostname' = 'localhost',
>   'port' = '3306',
>   'username' = 'root',
>   'password' = '1',
>   'database-name' = 'ai_audio_lyric_task',
>   'table-name' = 'status'
> );
> -- output
> CREATE TABLE test_status (
> `id` INT,
> `name` VARCHAR(255),
> `time` TIMESTAMP(3),
> `status` INT,
> `status_name` VARCHAR(255)
> PRIMARY KEY(id) NOT ENFORCED
> ) WITH (
>   'connector' = 'elasticsearch-7',
>   'hosts' = 'xxx',
>   'index' = 'xxx',
>   'username' = 'xxx',
>   'password' = 'xxx',
>   'sink.bulk-flush.backoff.max-retries' = '10',
>   'sink.bulk-flush.backoff.strategy' = 'CONSTANT',
>   'sink.bulk-flush.max-actions' = '5000',
>   'sink.bulk-flush.max-size' = '10mb',
>   'sink.bulk-flush.interval' = '1s'
> );
> INSERT into test_status
> SELECT t.*, s.name
> FROM test AS t
> LEFT JOIN status AS s ON t.status = s.id;
> {code}
> Data in mysql table:
> {code}
> test:
> 0, name0, 2020-07-06 00:00:00 , 0
> 1, name1, 2020-07-06 00:00:00 , 1
> 2, name2, 2020-07-06 00:00:00 , 1
> .
> status
> 0, status0
> 1, status1
> 2, status2
> .
> {code}
> Operations: 
> 1. Start the job with parallelism=40; the result in the test_status sink is correct: 
> {code}
> 0, name0, 2020-07-06 00:00:00 , 0, status0
> 1, name1, 2020-07-06 00:00:00 , 1, status1
> 2, name2, 2020-07-06 00:00:00 , 1, status1
> {code}
> 2. Update {{status}} of {{id=2}} record in table {{test}} from {{1}} to {{2}}.
> 3. Result is not correct because the {{id=2}} record is missing in the 
> result. 
> The reason is that it shuffles the changelog {{test}} on {{status}} column 
> which is not the primary key. Therefore, the ordering can't be guaranteed, 
> and the result is wrong. 
> The {{-U[2, name2, 2020-07-06 00:00:00, 1]}} and {{+U[2, name2, 2020-07-06 
> 00:00:00, 2]}} records may be shuffled to different join tasks, so the 
> order of the joined results is not guaranteed when they arrive at the sink task. 
> It is possible that {{+U[2, name2, 2020-07-06 00:00:00, status2]}} arrives 
> first and {{-U[2, name2, 2020-07-06 00:00:00, status1]}} arrives afterwards, in which case the 
> {{id=2}} record is missing in Elasticsearch. 
> It seems that we need a changelog ordering mechanism in the planner. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17400) LocalStandaloneKafkaResource.setupKafkaDist fails due to download timeout

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17400:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> LocalStandaloneKafkaResource.setupKafkaDist fails due to download timeout
> -
>
> Key: FLINK-17400
> URL: https://issues.apache.org/jira/browse/FLINK-17400
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, Test Infrastructure
>Affects Versions: 1.11.0, 1.12.2, 1.13.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=273=logs=68a897ab-3047-5660-245a-cce8f83859f6=375367d9-d72e-5c21-3be0-b45149130f6b
> {code}
> [ERROR] Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 215.598 s <<< FAILURE! - in 
> org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> [ERROR] testKafka[0: kafka-version:0.10 
> kafka-sql-version:.*kafka-0.10.jar](org.apache.flink.tests.util.kafka.SQLClientKafkaITCase)
>   Time elapsed: 120.023 s  <<< ERROR!
> java.io.IOException: Process ([wget, -q, -P, 
> /tmp/junit6433813062678759117/downloads/1665795946, 
> https://archive.apache.org/dist/kafka/0.10.2.0/kafka_2.11-0.10.2.0.tgz]) 
> exceeded timeout (12) or number of retries (3).
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlockingWithRetry(AutoClosableProcess.java:132)
>   at 
> org.apache.flink.tests.util.cache.AbstractDownloadCache.getOrDownload(AbstractDownloadCache.java:127)
>   at 
> org.apache.flink.tests.util.cache.LolCache.getOrDownload(LolCache.java:31)
>   at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.setupKafkaDist(LocalStandaloneKafkaResource.java:98)
>   at 
> org.apache.flink.tests.util.kafka.LocalStandaloneKafkaResource.before(LocalStandaloneKafkaResource.java:92)
>   at 
> org.apache.flink.util.ExternalResource$1.evaluate(ExternalResource.java:46)
>   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22686) Incompatible subtask mappings while resuming from unaligned checkpoints

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22686:
---
Labels: stale-blocker  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as a 
Blocker but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 1 day. I have gone ahead and marked it "stale-blocker". If this 
ticket is a Blocker, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Incompatible subtask mappings while resuming from unaligned checkpoints
> ---
>
> Key: FLINK-22686
> URL: https://issues.apache.org/jira/browse/FLINK-22686
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Arvid Heise
>Priority: Blocker
>  Labels: stale-blocker
> Fix For: 1.13.1
>
> Attachments: topology_1.png, topology_2.png, topology_3.png
>
>
> A user 
> [reported|https://lists.apache.org/x/list.html?u...@flink.apache.org:lte=1M:Flink%201.13.0%20reactive%20mode:%20Job%20stop%20and%20cannot%20restore%20from%20checkpoint]
>  that he encountered an internal error while resuming during reactive mode. 
> There isn't an immediate connection to reactive mode, so it's safe to assume 
> that one rescaling case was not covered.
> {noformat}
> Caused by: java.lang.IllegalStateException: Incompatible subtask mappings: 
> are multiple operators ingesting/producing intermediate results with varying 
> degrees of parallelism?Found RescaleMappings{mappings=[[0, 1, 2, 3, 4, 5, 6, 
> 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 
> 27, 28, 29], [30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 
> 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59], [60, 61, 62, 63, 64, 
> 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 
> 84, 85, 86, 87, 88, 89], [90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 
> 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 
> 117, 118, 119], [120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 
> 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 
> 147, 148, 149], [150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 
> 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 
> 177, 178, 179], [180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 
> 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 
> 207, 208, 209]]} and RescaleMappings{mappings=[[0, 7, 14, 21, 28, 35, 42, 49, 
> 56, 63, 70, 77, 84, 91, 98, 105, 112, 119, 126, 133, 140, 147, 154, 161, 168, 
> 175, 182, 189, 196, 203], [1, 8, 15, 22, 29, 36, 43, 50, 57, 64, 71, 78, 85, 
> 92, 99, 106, 113, 120, 127, 134, 141, 148, 155, 162, 169, 176, 183, 190, 197, 
> 204], [2, 9, 16, 23, 30, 37, 44, 51, 58, 65, 72, 79, 86, 93, 100, 107, 114, 
> 121, 128, 135, 142, 149, 156, 163, 170, 177, 184, 191, 198, 205], [3, 10, 17, 
> 24, 31, 38, 45, 52, 59, 66, 73, 80, 87, 94, 101, 108, 115, 122, 129, 136, 
> 143, 150, 157, 164, 171, 178, 185, 192, 199, 206], [4, 11, 18, 25, 32, 39, 
> 46, 53, 60, 67, 74, 81, 88, 95, 102, 109, 116, 123, 130, 137, 144, 151, 158, 
> 165, 172, 179, 186, 193, 200, 207], [5, 12, 19, 26, 33, 40, 47, 54, 61, 68, 
> 75, 82, 89, 96, 103, 110, 117, 124, 131, 138, 145, 152, 159, 166, 173, 180, 
> 187, 194, 201, 208], [6, 13, 20, 27, 34, 41, 48, 55, 62, 69, 76, 83, 90, 97, 
> 104, 111, 118, 125, 132, 139, 146, 153, 160, 167, 174, 181, 188, 195, 202, 
> 209]]}.
> at 
> org.apache.flink.runtime.checkpoint.TaskStateAssignment.checkSubtaskMapping(TaskStateAssignment.java:322)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> org.apache.flink.runtime.checkpoint.TaskStateAssignment.getInputMapping(TaskStateAssignment.java:306)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.reDistributeInputChannelStates(StateAssignmentOperation.java:409)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignAttemptState(StateAssignmentOperation.java:193)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> org.apache.flink.runtime.checkpoint.StateAssignmentOperation.assignStates(StateAssignmentOperation.java:139)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator.restoreLatestCheckpointedStateInternal(CheckpointCoordinator.java:1566)
>  ~[flink-dist_2.12-1.13.0.jar:1.13.0]
> at 
> 

[jira] [Updated] (FLINK-17519) Add Java State Bootstrapping E2E test for Stateful Functions

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17519:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Add Java State Bootstrapping E2E test for Stateful Functions
> 
>
> Key: FLINK-17519
> URL: https://issues.apache.org/jira/browse/FLINK-17519
> Project: Flink
>  Issue Type: New Feature
>  Components: Stateful Functions
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Major
>  Labels: auto-deprioritized-critical
>
> Add a Stateful Functions E2E test that writes a savepoint using the state 
> bootstrapping API, such that the savepoint can be restored by the greeter example.
> Then, deploy a Stateful Functions app using the 
> {{StatefulFunctionsAppContainers}} and restore from the written savepoint.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17726) Scheduler should take care of tasks directly canceled by TaskManager

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17726:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Scheduler should take care of tasks directly canceled by TaskManager
> 
>
> Key: FLINK-17726
> URL: https://issues.apache.org/jira/browse/FLINK-17726
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Runtime / Task
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Zhu Zhu
>Priority: Major
>  Labels: auto-deprioritized-critical
>
> JobManager will not trigger failure handling when receiving a CANCELED task 
> update. 
> This is because CANCELED tasks are usually caused by another FAILED task. 
> These CANCELED tasks will be restarted by the failover process triggered by the 
> FAILED task.
> However, if a task is directly CANCELED by the TaskManager due to its own runtime 
> issue, the task will not be recovered by the JM and thus the job would hang.
> This is a potential issue and we should avoid it.
> A possible solution is to let the JobManager treat tasks transitioning to 
> CANCELED from any state other than CANCELING as failed tasks. 
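
As a rough illustration of that rule, a self-contained sketch with a simplified state enum — this is not the actual Flink scheduler code, just the decision it describes:

{code:java}
enum State { CREATED, RUNNING, CANCELING, CANCELED, FAILED }

final class FailoverRule {

    private FailoverRule() {}

    /**
     * A CANCELED report that was not preceded by a JobManager-initiated CANCELING
     * means the TaskManager canceled the task on its own; treat it like a failure
     * so that failover restarts the task instead of letting the job hang.
     */
    static boolean shouldTriggerFailover(State previous, State current) {
        return current == State.FAILED
                || (current == State.CANCELED && previous != State.CANCELING);
    }
}
{code}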



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20614) Registered sql drivers not deregistered after task finished in session cluster

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20614:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Registered sql drivers not deregistered after task finished in session cluster
> --
>
> Key: FLINK-20614
> URL: https://issues.apache.org/jira/browse/FLINK-20614
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Runtime / Task
>Affects Versions: 1.12.0, 1.13.0
>Reporter: Kezhu Wang
>Priority: Critical
>  Labels: stale-critical
>
> {{DriverManager}} keeps registered drivers in its internal data structures, 
> which prevents them from being garbage collected after the task finishes. I confirmed it in a standalone 
> session cluster by observing that the {{ChildFirstClassLoader}} could not be 
> reclaimed after several {{GC.run}} invocations; it should exist in all session clusters.
> Tomcat documents 
> [this|https://ci.apache.org/projects/tomcat/tomcat85/docs/jndi-datasource-examples-howto.html#DriverManager,_the_service_provider_mechanism_and_memory_leaks]
>  and fixes/circumvents this with 
> [JdbcLeakPrevention|https://github.com/apache/tomcat/blob/master/java/org/apache/catalina/loader/JdbcLeakPrevention.java#L30].
> Should we solve this in the runtime? Or treat it as the connectors' and clients' 
> responsibility to solve it using 
> {{RuntimeContext.registerUserCodeClassLoaderReleaseHookIfAbsent}} or similar?
> Personally, it would be nice to solve it in the runtime as a catch-all to avoid 
> memory leaks and provide consistent behavior to clients across per-job and 
> session mode.
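
A minimal sketch of the connector/client-side workaround mentioned above — deregistering every JDBC driver that was registered by the user-code classloader, e.g. from a release hook registered via registerUserCodeClassLoaderReleaseHookIfAbsent. The helper class name is made up for illustration; only standard JDK APIs are used:

{code:java}
import java.sql.Driver;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.Enumeration;

final class JdbcDriverCleanup {

    private JdbcDriverCleanup() {}

    /** Deregisters every JDBC driver that was loaded by the given classloader. */
    static void deregisterDriversLoadedBy(ClassLoader userCodeClassLoader) {
        Enumeration<Driver> drivers = DriverManager.getDrivers();
        while (drivers.hasMoreElements()) {
            Driver driver = drivers.nextElement();
            if (driver.getClass().getClassLoader() == userCodeClassLoader) {
                try {
                    DriverManager.deregisterDriver(driver);
                } catch (SQLException e) {
                    // best effort: skip this driver and continue with the remaining ones
                }
            }
        }
    }
}
{code}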



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20427) Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to data loss

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20427:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Remove CheckpointConfig.setPreferCheckpointForRecovery because it can lead to 
> data loss
> ---
>
> Key: FLINK-20427
> URL: https://issues.apache.org/jira/browse/FLINK-20427
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.14.0
>
>
> The {{CheckpointConfig.setPreferCheckpointForRecovery}} setting allows configuring 
> whether Flink prefers checkpoints for recovery if the 
> {{CompletedCheckpointStore}} contains both savepoints and checkpoints. This is 
> problematic because, due to this feature, Flink might prefer older checkpoints 
> over newer savepoints for recovery. Since some components expect that always 
> the latest checkpoint/savepoint is used (e.g. the 
> {{SourceCoordinator}}), it breaks assumptions and can lead to 
> {{SourceSplits}} which are not read. This effectively means that the system 
> loses data. Similarly, this behaviour can cause exactly-once sinks to 
> output results multiple times, which violates the processing guarantees. 
> Hence, I believe that we should remove this setting because it changes 
> Flink's behaviour in a very significant way, potentially without the user 
> noticing.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20431) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134 expected:<10> but was:<1>

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20431:
---
Labels: auto-unassigned pull-request-available stale-critical 
test-stability  (was: auto-unassigned pull-request-available test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
> -
>
> Key: FLINK-20431
> URL: https://issues.apache.org/jira/browse/FLINK-20431
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Huang Xingbo
>Priority: Critical
>  Labels: auto-unassigned, pull-request-available, stale-critical, 
> test-stability
> Fix For: 1.13.1, 1.12.5
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10351=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5]
> [ERROR] Failures: 
> [ERROR] 
> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers:133->lambda$testCommitOffsetsWithoutAliveFetchers$3:134
>  expected:<10> but was:<1>
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-20432) SQLClientSchemaRegistryITCase hangs

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20432:
---
Labels: auto-deprioritized-critical pull-request-available stale-critical 
test-stability  (was: auto-deprioritized-critical pull-request-available 
test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> SQLClientSchemaRegistryITCase hangs
> ---
>
> Key: FLINK-20432
> URL: https://issues.apache.org/jira/browse/FLINK-20432
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.13.0, 1.12.3
>Reporter: Dian Fu
>Priority: Critical
>  Labels: auto-deprioritized-critical, pull-request-available, 
> stale-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10351=logs=739e6eac-8312-5d31-d437-294c4d26fced=a68b8d89-50e9-5977-4500-f4fde4f57f9b
> {code}
> 2020-12-01T01:07:29.6516521Z Dec 01 01:07:29 [INFO] 
> ---
> 2020-12-01T01:07:30.5779942Z Dec 01 01:07:30 [INFO] Running 
> org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> 2020-12-01T01:08:24.8896937Z Dec 01 01:08:24 [INFO] Tests run: 1, Failures: 
> 0, Errors: 0, Skipped: 0, Time elapsed: 54.305 s - in 
> org.apache.flink.tests.util.kafka.SQLClientKafkaITCase
> 2020-12-01T01:08:24.8900917Z Dec 01 01:08:24 [INFO] Running 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-12-01T01:09:09.0799444Z Dec 01 01:09:09 [INFO] Tests run: 1, Failures: 
> 0, Errors: 0, Skipped: 0, Time elapsed: 44.184 s - in 
> org.apache.flink.tests.util.kafka.StreamingKafkaITCase
> 2020-12-01T01:09:09.0825540Z Dec 01 01:09:09 [INFO] Running 
> org.apache.flink.tests.util.kafka.SQLClientSchemaRegistryITCase
> 2020-12-01T01:41:06.5542739Z 
> ==
> 2020-12-01T01:41:06.5544689Z === WARNING: This E2E Run will time out in the 
> next few minutes. Starting to upload the log output ===
> 2020-12-01T01:41:06.5549107Z 
> ==
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18444) KafkaITCase failing with "Failed to send data to Kafka: This server does not host this topic-partition"

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18444:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> KafkaITCase failing with "Failed to send data to Kafka: This server does not 
> host this topic-partition"
> ---
>
> Key: FLINK-18444
> URL: https://issues.apache.org/jira/browse/FLINK-18444
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.3, 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> Instance on master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4092=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f
> {code}
> 2020-06-28T21:37:54.8113215Z [ERROR] 
> testMultipleSourcesOnePartition(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 5.079 s  <<< ERROR!
> 2020-06-28T21:37:54.8113885Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-06-28T21:37:54.8114418Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-06-28T21:37:54.8114905Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
> 2020-06-28T21:37:54.8115397Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
> 2020-06-28T21:37:54.8116254Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1699)
> 2020-06-28T21:37:54.8116857Z  at 
> org.apache.flink.streaming.connectors.kafka.testutils.DataGenerators.generateRandomizedIntegerSequence(DataGenerators.java:120)
> 2020-06-28T21:37:54.8117715Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:933)
> 2020-06-28T21:37:54.8118327Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMultipleSourcesOnePartition(KafkaITCase.java:107)
> 2020-06-28T21:37:54.8118805Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-28T21:37:54.8119859Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-28T21:37:54.8120861Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-28T21:37:54.8121436Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-28T21:37:54.8121899Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-06-28T21:37:54.8122424Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-06-28T21:37:54.8122942Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-06-28T21:37:54.8123406Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-06-28T21:37:54.8123899Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-06-28T21:37:54.8124507Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-06-28T21:37:54.8124978Z  at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> 2020-06-28T21:37:54.8125332Z  at 
> java.base/java.lang.Thread.run(Thread.java:834)
> 2020-06-28T21:37:54.8125743Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-06-28T21:37:54.8126305Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> 2020-06-28T21:37:54.8126961Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> 2020-06-28T21:37:54.8127766Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> 2020-06-28T21:37:54.8128570Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
> 2020-06-28T21:37:54.8129140Z  at 
> 

[jira] [Updated] (FLINK-20928) KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 » Timeout

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20928:
---
  Labels: auto-deprioritized-critical test-stability  (was: stale-critical 
test-stability)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete:189->pollUntil:270 
> » Timeout
> ---
>
> Key: FLINK-20928
> URL: https://issues.apache.org/jira/browse/FLINK-20928
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: auto-deprioritized-critical, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=11861=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> [ERROR] Tests run: 8, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 93.992 s <<< FAILURE! - in 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest
> [ERROR] 
> testOffsetCommitOnCheckpointComplete(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.086 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:270)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testOffsetCommitOnCheckpointComplete(KafkaSourceReaderTest.java:189)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21065) Passing configuration from TableEnvironmentImpl to MiniCluster is not supported

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21065:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Passing configuration from TableEnvironmentImpl to MiniCluster is not 
> supported
> ---
>
> Key: FLINK-21065
> URL: https://issues.apache.org/jira/browse/FLINK-21065
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Table SQL / API
>Affects Versions: 1.13.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: auto-deprioritized-critical
> Attachments: Screenshot 2021-01-21 at 11.31.21.png
>
>
> While trying to fix a bug in Flink's scheduler, I needed to pass a 
> configuration parameter from the TableEnvironmentITCase (blink planner) to 
> the TaskManager.
> Changing this from:
> {code}
>case "TableEnvironment" =>
> tEnv = TableEnvironmentImpl.create(settings)
> {code}
> to
> {code}
>case "TableEnvironment" =>
> tEnv = TableEnvironmentImpl.create(settings)
> val conf = new Configuration();
> conf.setInteger("taskmanager.numberOfTaskSlots", 1)
> tEnv.getConfig.addConfiguration(conf)
> {code}
> This did not have any effect on the launched TaskManager.
> It seems that the configuration is not properly forwarded through all 
> abstractions.
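
A possible workaround sketch (untested, and only an assumption about where the option has to be supplied): pass the option when the local environment — and hence the MiniCluster — is created, rather than mutating the TableConfig after the fact:

{code:java}
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class LocalEnvWithConfig {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.setInteger("taskmanager.numberOfTaskSlots", 1);

        // The configuration is handed to the local MiniCluster at creation time,
        // so the TaskManager is started with the desired number of slots.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.createLocalEnvironment(1, conf);
        StreamTableEnvironment tEnv =
                StreamTableEnvironment.create(env, EnvironmentSettings.newInstance().build());
    }
}
{code}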



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21979) Job can be restarted from the beginning after it reached a terminal state

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21979:
---
Labels: stale-critical  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Critical but is unassigned, and neither it nor its Sub-Tasks have been 
updated for 7 days. I have gone ahead and marked it "stale-critical". If this 
ticket is critical, please either assign yourself or give an update. 
Afterwards, please remove the label, or in 7 days the issue will be 
deprioritized.


> Job can be restarted from the beginning after it reached a terminal state
> -
>
> Key: FLINK-21979
> URL: https://issues.apache.org/jira/browse/FLINK-21979
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.11.3, 1.12.2, 1.13.0
>Reporter: Till Rohrmann
>Priority: Critical
>  Labels: stale-critical
> Fix For: 1.14.0
>
>
> Currently, the {{JobMaster}} removes all checkpoints after a job reaches a 
> globally terminal state. Then it notifies the {{Dispatcher}} about the 
> termination of the job. The {{Dispatcher}} then removes the job from the 
> {{SubmittedJobGraphStore}}. If the {{Dispatcher}} process fails before doing 
> that it might get restarted. In this case, the {{Dispatcher}} would still 
> find the job in the {{SubmittedJobGraphStore}} and recover it. Since the 
> {{CompletedCheckpointStore}} is empty, it would start executing this job from 
> the beginning.
> I think we must not remove job state before the job has been marked as 
> done or made inaccessible to any restarted processes. Concretely, we should 
> first remove the job from the {{SubmittedJobGraphStore}} and only then delete 
> the checkpoints. Ideally, all the job-related cleanup operations happen 
> atomically.
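
A self-contained sketch of the ordering proposed above; the two interfaces are simplified stand-ins for the SubmittedJobGraphStore and the CompletedCheckpointStore, not the actual Flink types:

{code:java}
interface JobGraphStore { void removeJob(String jobId) throws Exception; }
interface CheckpointStore { void deleteCheckpoints(String jobId) throws Exception; }

final class TerminalJobCleanup {
    private final JobGraphStore jobGraphStore;
    private final CheckpointStore checkpointStore;

    TerminalJobCleanup(JobGraphStore jobGraphStore, CheckpointStore checkpointStore) {
        this.jobGraphStore = jobGraphStore;
        this.checkpointStore = checkpointStore;
    }

    void onJobReachedGloballyTerminalState(String jobId) throws Exception {
        // 1) Remove the job from the job graph store first, so a restarted Dispatcher
        //    can no longer recover it and run it from the beginning ...
        jobGraphStore.removeJob(jobId);
        // 2) ... and only then delete the checkpoints that belonged to it.
        checkpointStore.deleteCheckpoints(jobId);
    }
}
{code}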



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18800) Avro serialization schema doesn't support Kafka key/value serialization

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18800:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Avro serialization schema doesn't support  Kafka key/value serialization
> 
>
> Key: FLINK-18800
> URL: https://issues.apache.org/jira/browse/FLINK-18800
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Kafka, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Affects Versions: 1.11.0, 1.11.1
>Reporter: Mohammad Hossein Gerami
>Priority: Major
>  Labels: auto-deprioritized-critical
>
> {color:#ff8b00}AvroSerializationSchema{color} and 
> {color:#ff8b00}ConfluentRegistryAvroSerializationSchema{color} don't 
> support Kafka key/value serialization. I implemented a custom Avro 
> serialization schema to solve this problem. 
> Please reach a consensus on implementing a new class to support Kafka key/value 
> serialization.
> For example, Flink could provide a class like this:
> {code:java}
> public class KafkaAvroRegistrySchemaSerializationSchema extends 
> RegistryAvroSerializationSchema implements 
> KafkaSerializationSchema{code}
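
A rough sketch of what such a wrapper could look like — this class does not exist in Flink and the name is made up; it simply combines two plain SerializationSchema instances (e.g. Avro schemas) into a KafkaSerializationSchema so that both the record key and the record value can be produced:

{code:java}
import javax.annotation.Nullable;

import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;

import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyValueKafkaSerializationSchema<T> implements KafkaSerializationSchema<T> {

    private final String topic;
    @Nullable private final SerializationSchema<T> keySchema; // null => records without a key
    private final SerializationSchema<T> valueSchema;

    public KeyValueKafkaSerializationSchema(
            String topic,
            @Nullable SerializationSchema<T> keySchema,
            SerializationSchema<T> valueSchema) {
        this.topic = topic;
        this.keySchema = keySchema;
        this.valueSchema = valueSchema;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(T element, @Nullable Long timestamp) {
        // Serialize the key only if a key schema was provided; the value is always serialized.
        byte[] key = keySchema == null ? null : keySchema.serialize(element);
        return new ProducerRecord<>(topic, key, valueSchema.serialize(element));
    }
}
{code}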



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17349) Reduce runtime of LocalExecutorITCase

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-17349:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Reduce runtime of LocalExecutorITCase
> -
>
> Key: FLINK-17349
> URL: https://issues.apache.org/jira/browse/FLINK-17349
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Reporter: Aljoscha Krettek
>Priority: Major
>  Labels: auto-deprioritized-critical
>
> Running the whole ITCase takes ~3 minutes on my machine, which is not 
> acceptable for developer productivity and is also quite a burden on our CI 
> systems and PR iteration time.
> The issue is mostly that this does many costly operations, such as compiling 
> SQL queries. Some tests are inefficient in that they do a lot more 
> repetitions or test things that are not needed here. Also {{LocalExecutor}} 
> itself is a bit wasteful because every time a session property is changed, 
> when opening a session, and for other things we trigger reloading/re-parsing 
> the environment, which means all the defined catalogs, sources/sinks, and 
> views.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18652) JDBCAppendTableSink to ClickHouse (data always repeating)

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-18652:
---
  Labels: auto-deprioritized-critical  (was: stale-critical)
Priority: Major  (was: Critical)

This issue was labeled "stale-critical" 7 days ago and has not received any updates, 
so it is being deprioritized. If this ticket is actually Critical, please raise 
the priority and ask a committer to assign you the issue, or revive the public 
discussion.


> JDBCAppendTableSink  to  ClickHouse  (data  always  repeating)
> --
>
> Key: FLINK-18652
> URL: https://issues.apache.org/jira/browse/FLINK-18652
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC, Table SQL / Ecosystem
>Affects Versions: 1.10.0
>Reporter: mzz
>Priority: Major
>  Labels: auto-deprioritized-critical
> Attachments: FLINK-UI.png, checkpoint-failed.png
>
>
> Hi all,
> The data stream is: Kafka -> Flink SQL -> ClickHouse.
> The window is 15 minutes, but after the first 15 minutes the data 
> keeps being written to ClickHouse repeatedly. Please help, thanks.
> {code:java}
> // data source from kafka
> streamTableEnvironment.sqlUpdate(createTableSql)
> LOG.info("kafka source table has created !")
> val groupTable = streamTableEnvironment.sqlQuery(tempSql)
> streamTableEnvironment.createTemporaryView("aggs_temp_table", groupTable)
> // this is the window sql, using ProcessingTime
> val re_table = streamTableEnvironment.sqlQuery(windowSql)
> re_table.printSchema()
> //groupTable.printSchema()
> val rr = streamTableEnvironment.toAppendStream[Result](re_table)
> // The data here is printed normally
> rr.print()
> streamTableEnvironment.createTemporaryView("result_table", rr)
> val s = streamTableEnvironment.sqlQuery(sql)
> // sink to clickhouse
> val sink: JDBCAppendTableSink = JDBCAppendTableSink.builder()
>   .setDrivername("ru.yandex.clickhouse.ClickHouseDriver")
>   .setDBUrl(URL)
>   .setQuery(insertCKSql)
>   .setUsername(USERNAME)
>   .setPassword(PASSWORD)
>   .setBatchSize(1)
>   .setParameterTypes(
> Types.LONG, Types.LONG, Types.STRING, Types.STRING, Types.STRING, 
> Types.STRING,
> Types.STRING, Types.STRING, Types.STRING, Types.LONG, Types.LONG, 
> Types.FLOAT,
> Types.LONG, Types.FLOAT, Types.LONG, Types.FLOAT, Types.FLOAT, 
> Types.FLOAT, Types.LONG()
>   )
>   .build()
> streamTableEnvironment.registerTableSink("ckResult", 
> Array[String]("data_date", "point", "platform", "page_name", 
> "component_name", "booth_name", "position1", "advertiser",
>   "adv_code", "request_num", "return_num", "fill_rate", "expose_num", 
> "expose_rate", "click_num", "click_rate", "ecpm", "income", "created_at"),
>   Array[TypeInformation[_]](Types.LONG, Types.LONG, Types.STRING, 
> Types.STRING, Types.STRING, Types.STRING, Types.STRING, Types.STRING, 
> Types.STRING, Types.LONG, Types.LONG, Types.FLOAT, Types.LONG, Types.FLOAT, 
> Types.LONG, Types.FLOAT, Types.FLOAT, Types.FLOAT, Types.LONG()),
>   sink)
> // insert into TableSink
> s.insertInto("ckResult")
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22262) Flink on Kubernetes ConfigMaps are created without OwnerReference

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22262:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. If this 
ticket is Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Flink on Kubernetes ConfigMaps are created without OwnerReference
> -
>
> Key: FLINK-22262
> URL: https://issues.apache.org/jira/browse/FLINK-22262
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.13.0
>Reporter: Andrea Peruffo
>Priority: Major
>  Labels: stale-major
> Attachments: jm.log
>
>
> According to the documentation:
> [https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/resource-providers/native_kubernetes/#manual-resource-cleanup]
> The ConfigMaps created along with the Flink deployment are supposed to have an 
> OwnerReference pointing to the Deployment itself. Unfortunately, this doesn't 
> happen, and it causes all sorts of issues when the classpath and the jars of the 
> job are updated.
> i.e.:
> Without manually removing the ConfigMap of the Job, I cannot update the Jars 
> of the Job.
> Can you please give guidance on whether there are additional caveats to manually 
> removing the ConfigMap? Is there any other workaround that can be used?
> Thanks in advance.
> Example ConfigMap:
> {{apiVersion: v1}}
> {{data:}}
> {{ address: akka.tcp://flink@10.0.2.13:6123/user/rpc/jobmanager_2}}
> {{ checkpointID-049: 
> rO0ABXNyADtvcmcuYXBhY2hlLmZsaW5rLnJ1bnRpbWUuc3RhdGUuUmV0cmlldmFibGVTdHJlYW1TdGF0ZUhhbmRsZQABHhjxVZcrAgABTAAYd3JhcHBlZFN0cmVhbVN0YXRlSGFuZGxldAAyTG9yZy9hcGFjaGUvZmxpbmsvcnVudGltZS9zdGF0ZS9TdHJlYW1TdGF0ZUhhbmRsZTt4cHNyADlvcmcuYXBhY2hlLmZsaW5rLnJ1bnRpbWUuc3RhdGUuZmlsZXN5c3RlbS5GaWxlU3RhdGVIYW5kbGUE3HXYYr0bswIAAkoACXN0YXRlU2l6ZUwACGZpbGVQYXRodAAfTG9yZy9hcGFjaGUvZmxpbmsvY29yZS9mcy9QYXRoO3hwAAABOEtzcgAdb3JnLmFwYWNoZS5mbGluay5jb3JlLmZzLlBhdGgAAQIAAUwAA3VyaXQADkxqYXZhL25ldC9VUkk7eHBzcgAMamF2YS5uZXQuVVJJrAF4LkOeSasDAAFMAAZzdHJpbmd0ABJMamF2YS9sYW5nL1N0cmluZzt4cHQAUC9tbnQvZmxpbmsvc3RvcmFnZS9rc2hhL3RheGktcmlkZS1mYXJlLXByb2Nlc3Nvci9jb21wbGV0ZWRDaGVja3BvaW50MDQ0YTc2OWRkNDgxeA==}}
> {{ counter: "50"}}
> {{ sessionId: 0c2b69ee-6b41-48d3-b7fd-1bf2eda94f0f}}
> {{kind: ConfigMap}}
> {{metadata:}}
> {{ annotations:}}
> {{ control-plane.alpha.kubernetes.io/leader: 
> '\{"holderIdentity":"0f25a2cc-e212-46b0-8ba9-faac0732a316","leaseDuration":15.0,"acquireTime":"2021-04-13T14:30:51.439000Z","renewTime":"2021-04-13T14:39:32.011000Z","leaderTransitions":105}'}}
> {{ creationTimestamp: "2021-04-13T14:30:51Z"}}
> {{ labels:}}
> {{ app: taxi-ride-fare-processor}}
> {{ configmap-type: high-availability}}
> {{ type: flink-native-kubernetes}}
> {{ name: 
> taxi-ride-fare-processor--jobmanager-leader}}
> {{ namespace: taxi-ride-fare}}
> {{ resourceVersion: "64100"}}
> {{ selfLink: 
> /api/v1/namespaces/taxi-ride-fare/configmaps/taxi-ride-fare-processor--jobmanager-leader}}
> {{ uid: 9f912495-382a-45de-a789-fd5ad2a2459d}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22096) ServerTransportErrorHandlingTest.testRemoteClose fail

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22096:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. If this 
ticket is Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> ServerTransportErrorHandlingTest.testRemoteClose fail 
> --
>
> Key: FLINK-22096
> URL: https://issues.apache.org/jira/browse/FLINK-22096
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15966=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=05b74a19-4ee4-5036-c46f-ada307df6cf0=6580
> {code:java}
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.415 
> s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.netty.ServerTransportErrorHandlingTest
> [ERROR] 
> testRemoteClose(org.apache.flink.runtime.io.network.netty.ServerTransportErrorHandlingTest)
>   Time elapsed: 1.338 s  <<< ERROR!
> org.apache.flink.shaded.netty4.io.netty.channel.unix.Errors$NativeIoException:
>  bind(..) failed: Address already in use
> {code}
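
The failure itself ("Address already in use") is a classic test-port collision; a common remedy, sketched below purely for illustration (not the fix applied to this test), is to ask the OS for a free ephemeral port instead of binding a hard-coded one:

{code:java}
import java.net.ServerSocket;

public class FreePortFinder {
    public static void main(String[] args) throws Exception {
        // Binding to port 0 lets the OS pick an unused ephemeral port; the test can then
        // start its Netty server on that port instead of a hard-coded, possibly busy one.
        // Note there is still a small race between closing this socket and re-binding.
        try (ServerSocket socket = new ServerSocket(0)) {
            int freePort = socket.getLocalPort();
            System.out.println("Free port: " + freePort);
        }
    }
}
{code}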



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22108) Ephemeral socket address was checkpointed to state and restored back in CollectSinkOperatorCoordinator

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22108:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added the "stale-major" label to the issue. If this 
ticket is Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or in 7 days the issue will be deprioritized.


> Ephemeral socket address was checkpointed to state and restored back in 
> CollectSinkOperatorCoordinator
> --
>
> Key: FLINK-22108
> URL: https://issues.apache.org/jira/browse/FLINK-22108
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.13.0
>Reporter: Kezhu Wang
>Priority: Major
>  Labels: stale-major
>
> {{CollectSinkOperatorCoordinator}} checkpointed its {{address}} field to 
> state and restored it back. That field is the listener address of the 
> {{CollectSinkFunction}}. After {{resetToCheckpoint}} (e.g. a global failover), 
> {{address}} is meaningless. If a client request comes before the 
> {{CollectSinkAddressEvent}}, it will use that meaningless address for the 
> connection. In the best case, an error happens and nothing hurts. In the worst 
> case, no one knows where the restored address now points. 
> cc  [~TsReaper] [~ykt836]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21406) Add AvroParquetFileRecordFormat

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21406:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue, or revive the public 
discussion.


> Add AvroParquetFileRecordFormat
> ---
>
> Key: FLINK-21406
> URL: https://issues.apache.org/jira/browse/FLINK-21406
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream, Formats (JSON, Avro, Parquet, ORC, 
> SequenceFile)
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> There is currently no easy way to read avro GenericRecords from parquet via 
> the new {{FileSource}}.
> While helping out a user I started writing a FileRecordFormat that could do 
> that, but it requires some finalization.
> The implementation is similar to our ParquetAvroWriters class, in that we 
> just wrap some parquet classes and bridge our FileSystem with their IO 
> abstraction.
> The main goal was to have a format that reads data through our FileSystems, 
> and not work directly against Hadoop to prevent a ClassLoader leak from the 
> S3AFileSystem (threads in a thread pool can keep references to the user 
> classloader).
> According to the user it appears to be working, but it will need some 
> cleanup, ideally support for specific records, support for checkpointing 
> (which should be fairly easy I believe), maybe splitting files (not sure 
> whether this works properly with Parquet) and of course tests + documentation.
> {code}
> public class ParquetAvroFileRecordFormat implements FileRecordFormat<GenericRecord> {
>
>     private final transient Schema schema;
>
>     public ParquetAvroFileRecordFormat(Schema schema) {
>         this.schema = schema;
>     }
>
>     @Override
>     public Reader<GenericRecord> createReader(
>             Configuration config, Path filePath, long splitOffset, long splitLength)
>             throws IOException {
>
>         final FileSystem fs = filePath.getFileSystem();
>         final FileStatus status = fs.getFileStatus(filePath);
>         final FSDataInputStream in = fs.open(filePath);
>
>         return new MyReader(
>                 AvroParquetReader.<GenericRecord>builder(
>                                 new InputFileWrapper(in, status.getLen()))
>                         .withDataModel(GenericData.get())
>                         .build());
>     }
>
>     @Override
>     public Reader<GenericRecord> restoreReader(
>             Configuration config,
>             Path filePath,
>             long restoredOffset,
>             long splitOffset,
>             long splitLength) {
>         // not called if checkpointing isn't used
>         return null;
>     }
>
>     @Override
>     public boolean isSplittable() {
>         // let's not worry about this for now
>         return false;
>     }
>
>     @Override
>     public TypeInformation<GenericRecord> getProducedType() {
>         return new GenericRecordAvroTypeInfo(schema);
>     }
>
>     private static class MyReader implements FileRecordFormat.Reader<GenericRecord> {
>
>         private final ParquetReader<GenericRecord> parquetReader;
>
>         private MyReader(ParquetReader<GenericRecord> parquetReader) {
>             this.parquetReader = parquetReader;
>         }
>
>         @Nullable
>         @Override
>         public GenericRecord read() throws IOException {
>             return parquetReader.read();
>         }
>
>         @Override
>         public void close() throws IOException {
>             parquetReader.close();
>         }
>     }
>
>     private static class InputFileWrapper implements InputFile {
>
>         private final FSDataInputStream inputStream;
>         private final long length;
>
>         private InputFileWrapper(FSDataInputStream inputStream, long length) {
>             this.inputStream = inputStream;
>             this.length = length;
>         }
>
>         @Override
>         public long getLength() {
>             return length;
>         }
>
>         @Override
>         public SeekableInputStream newStream() {
>             return new SeekableInputStreamAdapter(inputStream);
>         }
>     }
>
>     private static class SeekableInputStreamAdapter extends DelegatingSeekableInputStream {
>
>         private final FSDataInputStream inputStream;
>
>         private SeekableInputStreamAdapter(FSDataInputStream inputStream) {
>             super(inputStream);
>             this.inputStream = inputStream;
>         }
>
>         @Override
>         public long getPos() throws IOException {
>             return inputStream.getPos();
>         }
>
>         @Override
>         public void seek(long newPos) throws IOException {
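For context, a minimal usage sketch (not part of the ticket) of wiring such a FileRecordFormat-based format into the new FileSource; it assumes the ParquetAvroFileRecordFormat sketched above and that FileSource.forRecordFileFormat is available in the targeted Flink version:

{code:java}
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.connector.file.src.FileSource;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public final class ParquetAvroSourceExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Avro schema (JSON string) and input path are supplied by the user; placeholders here.
        Schema schema = new Schema.Parser().parse(args[0]);
        Path inputDir = new Path(args[1]);

        // Plug the FileRecordFormat into the new FileSource.
        FileSource<GenericRecord> source =
                FileSource.forRecordFileFormat(new ParquetAvroFileRecordFormat(schema), inputDir)
                        .build();

        DataStream<GenericRecord> records =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "parquet-avro-source");
        records.print();

        env.execute("parquet-avro-example");
    }
}
{code}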

[jira] [Updated] (FLINK-21407) Clarify which sources and APIs support which formats

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21407:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Clarify which sources and APIs support which formats
> 
>
> Key: FLINK-21407
> URL: https://issues.apache.org/jira/browse/FLINK-21407
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataSet, API / DataStream, Documentation
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> The DataSet connectors documentation is essentially an empty desert amounting 
> to "you can read files".
> The DataStream connectors documentation do not mention formats like 
> avro/parquet anywhere, nor the possibility to read from filesystems (only the 
> sinks are documented).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22111) ClientTest.testSimpleRequests fail

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22111:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> ClientTest.testSimpleRequests fail
> --
>
> Key: FLINK-22111
> URL: https://issues.apache.org/jira/browse/FLINK-22111
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16056=logs=3d12d40f-c62d-5ec4-6acc-0efe94cc3e89=5d6e4255-0ea8-5e2a-f52c-c881b7872361=15421
> {code:java}
> 21:47:16,289 [nioEventLoopGroup-4-3] WARN  
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer [] - 
> Failed to initialize a channel. Closing: [id: 0x40eab0f6, L:/172.29.0.2:43846 
> - R:/172.29.0.2:42436]
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelPipelineException: 
> org.apache.flink.queryablestate.network.ClientTest$1 is not a @Sharable 
> handler, so can't be added or removed multiple times.
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.checkMultiplicity(DefaultChannelPipeline.java:600)
>  ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:202)
>  ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:381)
>  ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.addLast(DefaultChannelPipeline.java:370)
>  ~[flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.queryablestate.network.ClientTest$5.initChannel(ClientTest.java:897)
>  ~[test-classes/:?]
>   at 
> org.apache.flink.queryablestate.network.ClientTest$5.initChannel(ClientTest.java:890)
>  ~[test-classes/:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer.initChannel(ChannelInitializer.java:129)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.ChannelInitializer.handlerAdded(ChannelInitializer.java:112)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.callHandlerAdded(AbstractChannelHandlerContext.java:938)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.callHandlerAdded0(DefaultChannelPipeline.java:609)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.access$100(DefaultChannelPipeline.java:46)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline$PendingHandlerAddedTask.execute(DefaultChannelPipeline.java:1463)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.callHandlerAddedForAllHandlers(DefaultChannelPipeline.java:1115)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.DefaultChannelPipeline.invokeHandlerAddedIfNeeded(DefaultChannelPipeline.java:650)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.register0(AbstractChannel.java:502)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe.access$200(AbstractChannel.java:417)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannel$AbstractUnsafe$1.run(AbstractChannel.java:474)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
>  [flink-shaded-netty-4.1.49.Final-12.0.jar:?]
>   at 
> 

[jira] [Updated] (FLINK-22347) Support renaming shipped archives on Yarn

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22347:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Support renaming shipped archives on Yarn
> -
>
> Key: FLINK-22347
> URL: https://issues.apache.org/jira/browse/FLINK-22347
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Yang Wang
>Priority: Major
>  Labels: stale-major
>
> Currently, Flink supports shipping archives via {{yarn.ship-archives}}. 
> However, it is impossible to rename the unzipped directory of these archives.
> We just need to use "#" for the renaming. For example, with 
> {{python_3_with_flink.tar.gz#environment}} the unzipped directory will be 
> renamed to environment.
> Some other distributed frameworks (e.g. Hadoop MapReduce, Spark) have 
> similar support.
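A hedged sketch of how the proposed "#" renaming could be interpreted when parsing a {{yarn.ship-archives}} entry; the class and method names are illustrative only, not Flink's actual implementation:

{code:java}
public final class ShipArchiveEntry {

    /**
     * Splits a ship-archives entry of the form "archive.tar.gz#alias" into the
     * archive path and the desired local directory name. Without "#", the
     * archive's own file name is kept, matching today's behavior.
     */
    static String[] parse(String entry) {
        int idx = entry.lastIndexOf('#');
        if (idx < 0) {
            String fileName = entry.substring(entry.lastIndexOf('/') + 1);
            return new String[] {entry, fileName};
        }
        return new String[] {entry.substring(0, idx), entry.substring(idx + 1)};
    }

    public static void main(String[] args) {
        String[] parsed = parse("python_3_with_flink.tar.gz#environment");
        // prints: python_3_with_flink.tar.gz -> environment
        System.out.println(parsed[0] + " -> " + parsed[1]);
    }
}
{code}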



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22090) Upload logs fails

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22090:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Upload logs fails
> -
>
> Key: FLINK-22090
> URL: https://issues.apache.org/jira/browse/FLINK-22090
> Project: Flink
>  Issue Type: Bug
>  Components: Test Infrastructure
>Reporter: Matthias
>Priority: Major
>  Labels: stale-major, test-stability
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=382=logs=9dc1b5dc-bcfa-5f83-eaa7-0cb181ddc267=599dab09-ab33-58b6-4804-349ab7dc2f73]
>  failed just because an {{upload logs}} step failed. It looks like this is an 
> AzureCI problem. Is this a known issue?
> The artifacts seem to have been uploaded based on the logs, but [the download 
> link|https://dev.azure.com/mapohl/flink/_build/results?buildId=382=logs=9dc1b5dc-bcfa-5f83-eaa7-0cb181ddc267]
>  does not show up.
> Another build that had the same issue: 
> [test_ci_blinkplanner|https://dev.azure.com/mapohl/flink/_build/results?buildId=383=logs=d1352042-8a7d-50b6-3946-a85d176b7981=7b7009bb-e6bf-5426-3d4b-20b25eada636=75]
>  and 
> [test_ci_build_core|https://dev.azure.com/mapohl/flink/_build/results?buildId=383=logs=9dc1b5dc-bcfa-5f83-eaa7-0cb181ddc267=599dab09-ab33-58b6-4804-349ab7dc2f73=44]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22126) when i set ssl ,the jobmanager got certificate_unknown exception

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22126:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> when i set ssl ,the jobmanager got certificate_unknown exception
> 
>
> Key: FLINK-22126
> URL: https://issues.apache.org/jira/browse/FLINK-22126
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: tonychan
>Priority: Major
>  Labels: stale-major
> Attachments: image-2021-04-07-09-26-16-490.png, 
> image-2021-04-07-09-26-21-958.png
>
>
> !image-2021-04-07-09-26-21-958.png!
> my setup as below:
>  
> keytool -genkeypair -alias ca -keystore ca.keystore -dname "CN=ART002" 
> -storepass ca_keystore_password -keyalg RSA -keysize 4096 -ext "bc=ca:true" 
> -storetype PKCS12
> keytool -exportcert -keystore ca.keystore -alias ca -storepass 
> ca_keystore_password -file ca.cer
> keytool -importcert -keystore ca.truststore -alias ca -storepass 
> ca_truststore_password -file ca.cer -noprompt
>  
> keytool -genkeypair -alias flink.rest -keystore rest.signed.keystore -dname 
> "CN=ART002" -ext "SAN=dns:ART002" -storepass rest_keystore_password -keyalg 
> RSA -keysize 4096 -storetype PKCS12
> keytool -certreq -alias flink.rest -keystore rest.signed.keystore -storepass 
> rest_keystore_password -file rest.csr
> keytool -gencert -alias ca -keystore ca.keystore -storepass 
> ca_keystore_password -ext "SAN=dns:ART002,ip:*.*0.145.92" -infile rest.csr 
> -outfile rest.cer
> keytool -importcert -keystore rest.signed.keystore -storepass 
> rest_keystore_password -file ca.cer -alias ca -noprompt
> keytool -importcert -keystore rest.signed.keystore -storepass 
> rest_keystore_password -file rest.cer -alias flink.rest -noprompt
>  
>  
> security.ssl.rest.enabled: true
> security.ssl.rest.keystore: /data/flink/flink-1.11.2/ssl/rest.signed.keystore
> security.ssl.rest.truststore: /data/flink/flink-1.11.2/ssl/ca.truststore
> security.ssl.rest.keystore-password: rest_keystore_password
> security.ssl.rest.key-password: rest_keystore_password
> security.ssl.rest.truststore-password: ca_truststore_password



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22114) Streaming File Sink s3 end-to-end test fail because the test did not finish after 900 seconds.

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22114:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Streaming File Sink s3 end-to-end test fail because the test  did not finish 
> after 900 seconds.
> ---
>
> Key: FLINK-22114
> URL: https://issues.apache.org/jira/browse/FLINK-22114
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems
>Affects Versions: 1.12.2
>Reporter: Guowei Ma
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16038=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=ff888d9b-cd34-53cc-d90f-3e446d355529=13756
> {code:java}
> Image docker.io/stedolan/jq:latest uses outdated schema1 manifest format. 
> Please upgrade to a schema2 image for better future compatibility. More 
> information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
> 237d5fcd25cf: Pulling fs layer
> a3ed95caeb02: Pulling fs layer
> 1169f6d603e5: Pulling fs layer
> 4dae4fd48813: Pulling fs layer
> 4dae4fd48813: Waiting
> 1169f6d603e5: Verifying Checksum
> 1169f6d603e5: Download complete
> a3ed95caeb02: Verifying Checksum
> a3ed95caeb02: Download complete
> 237d5fcd25cf: Verifying Checksum
> 237d5fcd25cf: Download complete
> 4dae4fd48813: Verifying Checksum
> 4dae4fd48813: Download complete
> 237d5fcd25cf: Pull complete
> a3ed95caeb02: Pull complete
> 1169f6d603e5: Pull complete
> 4dae4fd48813: Pull complete
> Digest: 
> sha256:a61ed0bca213081b64be94c5e1b402ea58bc549f457c2682a86704dd55231e09
> Status: Downloaded newer image for stedolan/jq:latest
> parse error: Invalid numeric literal at line 1, column 6
> Apr 03 22:37:09 Number of produced values 0/6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> Error: No such container: 
> parse error: Invalid numeric literal at line 1, column 6
> 
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22091) env.java.home option didn't take effect in resource negotiator

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22091:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> env.java.home option didn't take effect in resource negotiator
> --
>
> Key: FLINK-22091
> URL: https://issues.apache.org/jira/browse/FLINK-22091
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Configuration
>Affects Versions: 1.11.1, 1.12.2
>Reporter: zlzhang0122
>Priority: Major
>  Labels: stale-major
>
> If we set the value of env.java.home in flink-conf.yaml, it takes 
> effect in standalone mode, but it won't take effect with resource negotiators 
> such as YARN, Kubernetes, etc. Maybe we can make a change so that it takes 
> effect there as well?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22309) Reduce response time when using SQL Client submit query(SELECT)

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22309:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Reduce response time when using SQL Client submit query(SELECT)
> ---
>
> Key: FLINK-22309
> URL: https://issues.apache.org/jira/browse/FLINK-22309
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: stale-major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22194) KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to commit timeout

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22194:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers fail due to 
> commit timeout
> --
>
> Key: FLINK-22194
> URL: https://issues.apache.org/jira/browse/FLINK-22194
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16308=logs=b0097207-033c-5d9a-b48c-6d4796fbe60d=e8fcc430-213e-5cce-59d4-6942acf09121=6535
> {code:java}
> [ERROR] 
> testCommitOffsetsWithoutAliveFetchers(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 60.123 s  <<< ERROR!
> java.util.concurrent.TimeoutException: The offset commit did not finish 
> before timeout.
>   at 
> org.apache.flink.core.testutils.CommonTestUtils.waitUtil(CommonTestUtils.java:210)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.pollUntil(KafkaSourceReaderTest.java:285)
>   at 
> org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest.testCommitOffsetsWithoutAliveFetchers(KafkaSourceReaderTest.java:129)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21209) Update stackbrew maintainer field

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21209:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Update stackbrew maintainer field
> -
>
> Key: FLINK-21209
> URL: https://issues.apache.org/jira/browse/FLINK-21209
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Chesnay Schepler
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> The Flink stackbrew file in the official-images repo lists the original 
> maintainers as maintainers of the current images, where ideally it should now 
> just list the Flink project with the 
> [d...@flink.apache.org|mailto:d...@flink.apache.org] mailing address.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22112) EmulatedPubSubSinkTest fail due to pull docker image failure

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22112:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> EmulatedPubSubSinkTest fail due to pull docker image failure
> 
>
> Key: FLINK-22112
> URL: https://issues.apache.org/jira/browse/FLINK-22112
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub
>Affects Versions: 1.12.2
>Reporter: Guowei Ma
>Priority: Major
>  Labels: stale-major, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=16038=logs=08866332-78f7-59e4-4f7e-49a56faa3179=7f606211-1454-543c-70ab-c7a028a1ce8c=8586
> {code:java}
> Apr 03 22:30:22 [ERROR] 
> org.apache.flink.streaming.connectors.gcp.pubsub.EmulatedPubSubSinkTest  Time 
> elapsed: 17.561 s  <<< ERROR!
> Apr 03 22:30:22 com.spotify.docker.client.exceptions.DockerRequestException: 
> Apr 03 22:30:22 Request error: POST 
> unix://localhost:80/images/create?fromImage=google%2Fcloud-sdk=313.0.0: 
> 500, body: {"message":"Head 
> https://registry-1.docker.io/v2/google/cloud-sdk/manifests/313.0.0: Get 
> https://auth.docker.io/token?account=githubactions=repository%3Agoogle%2Fcloud-sdk%3Apull=registry.docker.io:
>  net/http: request canceled (Client.Timeout exceeded while awaiting headers)"}
> Apr 03 22:30:22 
> Apr 03 22:30:22   at 
> com.spotify.docker.client.DefaultDockerClient.requestAndTail(DefaultDockerClient.java:2800)
> Apr 03 22:30:22   at 
> com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1346)
> Apr 03 22:30:22   at 
> com.spotify.docker.client.DefaultDockerClient.pull(DefaultDockerClient.java:1323)
> Apr 03 22:30:22   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager.launchDocker(GCloudEmulatorManager.java:103)
> Apr 03 22:30:22   at 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudUnitTestBase.launchGCloudEmulator(GCloudUnitTestBase.java:46)
> Apr 03 22:30:22   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method)
> Apr 03 22:30:22   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> Apr 03 22:30:22   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> Apr 03 22:30:22   at java.lang.reflect.Method.invoke(Method.java:498)
> Apr 03 22:30:22   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> Apr 03 22:30:22   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> Apr 03 22:30:22   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> Apr 03 22:30:22   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
> Apr 03 22:30:22   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> Apr 03 22:30:22   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> Apr 03 22:30:22   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22118) Always apply projection push down in blink planner

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22118:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Always apply projection push down in blink planner
> --
>
> Key: FLINK-22118
> URL: https://issues.apache.org/jira/browse/FLINK-22118
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Planner
>Affects Versions: 1.13.0
>Reporter: Shengkai Fang
>Priority: Major
>  Labels: stale-major
>
> Please add the case in `TableSourceTest`.
> {code:java}
>   s"""
>  |CREATE TABLE NestedItemTable (
>  |  `id` INT,
>  |  `result` ROW<
>  | `data_arr` ROW<`value` BIGINT> ARRAY,
>  | `data_map` MAP>>,
>  |  ) WITH (
>  |'connector' = 'values',
>  |'nested-projection-supported' = 'true',
>  |'bounded' = 'true'
>  |  )
>  |""".stripMargin
> util.tableEnv.executeSql(ddl4)
> util.verifyExecPlan(
>   s"""
>  |SELECT
>  |  `result`.`data_arr`[`id`].`value`,
>  |  `result`.`data_map`['item'].`value`
>  |FROM NestedItemTable
>  |""".stripMargin
> )
> {code}
> we can get optimized plan
> {code:java}
> Calc(select=[ITEM(result.data_arr, id).value AS EXPR$0, ITEM(result.data_map, 
> _UTF-16LE'item').value AS EXPR$1])
> +- TableSourceScan(table=[[default_catalog, default_database, 
> NestedItemTable]], fields=[id, result])
> {code}
> but expected is
> {code:java}
> Calc(select=[ITEM(result_data_arr, id).value AS EXPR$0, ITEM(result_data_map, 
> _UTF-16LE'item').value AS EXPR$1])
> +- TableSourceScan(table=[[default_catalog, default_database, 
> NestedItemTable, project=[result_data_arr, result_data_map, id]]], 
> fields=[result_data_arr, result_data_map, id])
> {code}
> It seems the planner doesn't apply the rule to push projection into scan. The 
> reason why we have different results is the optimized plan has more fields 
> than before.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21374) Upgrade built-in Hive to 2.3.8

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21374:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Upgrade built-in Hive to 2.3.8
> --
>
> Key: FLINK-21374
> URL: https://issues.apache.org/jira/browse/FLINK-21374
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.13.0
>Reporter: Yuming Wang
>Priority: Minor
>  Labels: auto-deprioritized-major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21438) Broken links in content.zh/docs/concepts/flink-architecture.md

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21438:
---
  Labels: auto-deprioritized-major pull-request-available  (was: 
pull-request-available stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Broken links in content.zh/docs/concepts/flink-architecture.md
> --
>
> Key: FLINK-21438
> URL: https://issues.apache.org/jira/browse/FLINK-21438
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.13.0
>Reporter: Ting Sun
>Priority: Minor
>  Labels: auto-deprioritized-major, pull-request-available
> Fix For: 1.14.0
>
>
> When reading the Chinese docs I found that some links are broken and return 
> 404. This is caused by a formatting error: the broken links differ from their 
> corresponding links in the English docs, while the links that are identical 
> to those in the English docs work fine.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21317) Downstream keyed state not work after FlinkKafkaShuffle

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21317:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Downstream keyed state not work after FlinkKafkaShuffle
> ---
>
> Key: FLINK-21317
> URL: https://issues.apache.org/jira/browse/FLINK-21317
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.13.0
>Reporter: Kezhu Wang
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0, 1.13.1
>
>
> {{FlinkKafkaShuffle}} uses 
> {{KeyGroupRangeAssignment.assignKeyToParallelOperator}} to assign partition 
> records to kafka topic partition. The assignment works as follow:
>  # {{KeyGroupRangeAssignment.assignToKeyGroup(Object key, int 
> maxParallelism)}} assigns key to key group.
>  # {{KeyGroupRangeAssignment.computeOperatorIndexForKeyGroup(int 
> maxParallelism, int parallelism, int keyGroupId)}} assigns that key group to 
> operator/subtask index.
> When kafka topic partitions are consumed, they are redistributed by 
> {{KafkaTopicPartitionAssigner.assign(KafkaTopicPartition partition, int 
> numParallelSubtasks)}}. I copied code of this redistribution here.
> {code:java}
> public class KafkaTopicPartitionAssigner {
>
>     public static int assign(KafkaTopicPartition partition, int numParallelSubtasks) {
>         int startIndex =
>                 ((partition.getTopic().hashCode() * 31) & 0x7FFFFFFF) % numParallelSubtasks;
>
>         // here, the assumption is that the id of Kafka partitions are always ascending
>         // starting from 0, and therefore can be used directly as the offset clockwise
>         // from the start index
>         return (startIndex + partition.getPartition()) % numParallelSubtasks;
>     }
> }
> {code}
> This partition redistribution breaks the prerequisites for 
> {{DataStreamUtils.reinterpretAsKeyedStream}}, that is, the key groups are messed 
> up. The consequence is unusable keyed state. I list the deepest stack trace 
> captured here:
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.flink.runtime.state.heap.StateTable.transform(StateTable.java:205)
>   at 
> org.apache.flink.runtime.state.heap.HeapReducingState.add(HeapReducingState.java:100)
> {noformat}
> cc [~ym]  [~sewen] [~AHeise]  [~pnowojski]
> Below are my proposed changes:
> * Make the assignment between partition and subtask customizable.
> * Provide a 0-based round-robin assignment (this amounts to making {{startIndex}} 0 
> in the existing assignment algorithm).
> I saw FLINK-8570; the above changes could be helpful if we finally decide to 
> deliver FLINK-8570.
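A minimal sketch of the proposed 0-based round-robin assignment, i.e. the existing {{assign}} shown above with {{startIndex}} fixed to 0; the class name is illustrative only:

{code:java}
public final class ZeroBasedPartitionAssigner {

    /** With startIndex fixed to 0, the subtask index is simply the partition id modulo parallelism. */
    public static int assign(int kafkaPartition, int numParallelSubtasks) {
        return kafkaPartition % numParallelSubtasks;
    }

    public static void main(String[] args) {
        // e.g. partition 5 consumed by 3 subtasks lands on subtask index 2
        System.out.println(assign(5, 3));
    }
}
{code}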



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21214) FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint Failed

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21214:
---
  Labels: auto-deprioritized-major test-stability  (was: stale-major 
test-stability)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> FlinkKafkaProducerITCase.testScaleDownBeforeFirstCheckpoint Failed
> --
>
> Key: FLINK-21214
> URL: https://issues.apache.org/jira/browse/FLINK-21214
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.11.0, 1.12.0, 1.13.0
>Reporter: Guowei Ma
>Priority: Minor
>  Labels: auto-deprioritized-major, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=12687=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5]
>  
> [ERROR] 
> testScaleDownBeforeFirstCheckpoint(org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducerITCase)
>  Time elapsed: 62.857 s <<< ERROR! 
> org.apache.kafka.common.errors.TimeoutException: 
> org.apache.kafka.common.errors.TimeoutException: Timeout expired after 
> 6milliseconds while awaiting InitProducerId 
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timeout expired 
> after 6milliseconds while awaiting InitProducerId 
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22189) Flink SQL Client Hudi batch write crashes JobManager

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-22189:
---
Labels: stale-major  (was: )

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned, and neither it nor its Sub-Tasks have been updated 
for 30 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is indeed Major, please either assign yourself or give an update. Afterwards, 
please remove the label; otherwise the issue will be deprioritized in 7 days.


> Flink SQL Client Hudi batch write crashes JobManager
> 
>
> Key: FLINK-22189
> URL: https://issues.apache.org/jira/browse/FLINK-22189
> Project: Flink
>  Issue Type: Bug
>  Components: Client / Job Submission
>Affects Versions: 1.11.3
>Reporter: Phil Chen
>Priority: Major
>  Labels: stale-major
> Attachments: 
> flink-pchen-standalonesession-0-Phils-MBP.fios-router.home.log
>
>
> Flink SQL> create table t2(
> > uuid varchar(20),
> > name varchar(10),
> > age int,
> > ts timestamp(3),
> > `partition` varchar(20)
> > )
> > PARTITIONED BY (`partition`)
> > with (
> > 'connector' = 'hudi',
> > 'path' = 'file:///tmp/hudi/t2'
> > );
> [INFO] Table has been created.
> Flink SQL> insert into t2 values
> > ('id1','Danny',23,TIMESTAMP '1970-01-01 00:00:01','par1'),
> > ('id2','Stephen',33,TIMESTAMP '1970-01-01 00:00:02','par1'),
> > ('id3','Julian',53,TIMESTAMP '1970-01-01 00:00:03','par2'),
> > ('id4','Fabian',31,TIMESTAMP '1970-01-01 00:00:04','par2'),
> > ('id5','Sophia',18,TIMESTAMP '1970-01-01 00:00:05','par3'),
> > ('id6','Emma',20,TIMESTAMP '1970-01-01 00:00:06','par3'),
> > ('id7','Bob',44,TIMESTAMP '1970-01-01 00:00:07','par4'),
> > ('id8','Han',56,TIMESTAMP '1970-01-01 00:00:08','par4');
> [INFO] Submitting SQL update statement to the cluster...
> [INFO] Table update statement has been successfully submitted to the cluster:
> Job ID: f5828dfb80550b82f9a2f15afe2439a0
>  
> Check flink dashboard at [http://localhost:8081|http://localhost:8081/]
> JM crashed.
>  
> Hadoop version: 3.2.2
> Apache Hudi version:
> lib/hudi-flink-bundle_2.11-0.8.0.jar
> See attached JobManager log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21562) Add more informative message on CSV parsing errors

2021-05-19 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-21562:
---
  Labels: auto-deprioritized-major  (was: stale-major)
Priority: Minor  (was: Major)

This issue was labeled "stale-major" 7 days ago and has not received any updates, so 
it is being deprioritized. If this ticket is actually Major, please raise the 
priority and ask a committer to assign you the issue or revive the public 
discussion.


> Add more informative message on CSV parsing errors
> --
>
> Key: FLINK-21562
> URL: https://issues.apache.org/jira/browse/FLINK-21562
> Project: Flink
>  Issue Type: Improvement
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Affects Versions: 1.11.3
>Reporter: Nico Kruber
>Priority: Minor
>  Labels: auto-deprioritized-major
>
> I was parsing a CSV file with comments in it and used {{'csv.allow-comments' 
> = 'true'}} without also passing {{'csv.ignore-parse-errors' = 'true'}} to the 
> table DDL to not hide any actual format errors.
> Since I didn't just have strings in my table, this did of course stumble on 
> the commented-out line with the following error:
> {code}
> 2021-02-16 17:45:53,055 WARN  org.apache.flink.runtime.taskmanager.Task   
>  [] - Source: TableSourceScan(table=[[default_catalog, 
> default_database, airports]], fields=[IATA_CODE, AIRPORT, CITY, STATE, 
> COUNTRY, LATITUDE, LONGITUDE]) -> SinkConversionToTuple2 -> Sink: SQL Client 
> Stream Collect Sink (1/1)#0 (9f3a3965f18ed99ee42580bdb559ba66) switched from 
> RUNNING to FAILED.
> java.io.IOException: Failed to deserialize CSV row.
>   at 
> org.apache.flink.formats.csv.CsvFileSystemFormatFactory$CsvInputFormat.nextRecord(CsvFileSystemFormatFactory.java:257)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.formats.csv.CsvFileSystemFormatFactory$CsvInputFormat.nextRecord(CsvFileSystemFormatFactory.java:162)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:90)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:241)
>  ~[flink-dist_2.12-1.12.1.jar:1.12.1]
> Caused by: java.lang.NumberFormatException: empty String
>   at 
> sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:1842) 
> ~[?:1.8.0_275]
>   at sun.misc.FloatingDecimal.parseDouble(FloatingDecimal.java:110) 
> ~[?:1.8.0_275]
>   at java.lang.Double.parseDouble(Double.java:538) ~[?:1.8.0_275]
>   at 
> org.apache.flink.formats.csv.CsvToRowDataConverters.convertToDouble(CsvToRowDataConverters.java:203)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.formats.csv.CsvToRowDataConverters.lambda$createNullableConverter$ac6e531e$1(CsvToRowDataConverters.java:113)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.formats.csv.CsvToRowDataConverters.lambda$createRowConverter$18bb1dd$1(CsvToRowDataConverters.java:98)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   at 
> org.apache.flink.formats.csv.CsvFileSystemFormatFactory$CsvInputFormat.nextRecord(CsvFileSystemFormatFactory.java:251)
>  ~[flink-csv-1.12.1.jar:1.12.1]
>   ... 5 more
> {code}
> Two things should be improved here:
> # commented-out lines should be ignored by default (potentially, FLINK-17133 
> addresses this or at least gives the user the power to do so)
> # the error message itself is not very informative: "empty String".
> This ticket is about the latter. I would suggest having at least a few more 
> pointers to direct the user towards finding the source of the problem in the 
> CSV file/item/... - here, the data type could simply be wrong, or the CSV file 
> itself may be wrong/corrupted and the user would need to investigate.
> What exactly helps probably depends on the actual input connector this format 
> is working with: a line number in a CSV file would be best; where that is not 
> possible we could show the whole line 
> or at least a few surrounding fields...
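A hedged sketch of the kind of context the error could carry, wrapping the parse failure with the field index and the offending raw record; the class and method names are hypothetical and not the actual flink-csv code:

{code:java}
import java.io.IOException;

public final class CsvErrorContextExample {

    /** Wraps a numeric parse failure with the field index and the raw CSV line. */
    static double parseDoubleField(String rawLine, int fieldIndex, String value) throws IOException {
        try {
            return Double.parseDouble(value);
        } catch (NumberFormatException e) {
            throw new IOException(
                    "Failed to parse field #" + fieldIndex + " as DOUBLE in CSV record: '" + rawLine + "'",
                    e);
        }
    }

    public static void main(String[] args) throws IOException {
        // succeeds for a valid value; a commented-out line would surface the full record in the message
        System.out.println(parseDoubleField("LAX,Los Angeles International,33.94,-118.40", 2, "33.94"));
    }
}
{code}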



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

