[jira] [Reopened] (FLINK-23525) Docker command fails on Azure: Exit code 137 returned from process: file name '/usr/bin/docker'

2021-09-16 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song reopened FLINK-23525:
--

new instance 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24232&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10856

> Docker command fails on Azure: Exit code 137 returned from process: file name 
> '/usr/bin/docker'
> ---
>
> Key: FLINK-23525
> URL: https://issues.apache.org/jira/browse/FLINK-23525
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines
>Affects Versions: 1.14.0, 1.13.1
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: auto-deprioritized-blocker, test-stability
> Attachments: screenshot-1.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=21053&view=logs&j=4d4a0d10-fca2-5507-8eed-c07f0bdf4887&t=7b25afdf-cc6c-566f-5459-359dc2585798&l=10034
> {code}
> ##[error]Exit code 137 returned from process: file name '/usr/bin/docker', 
> arguments 'exec -i -u 1001  -w /home/vsts_azpcontainer 
> 9dca235e075b70486fac576ee17cee722940edf575e5478e0a52def5b46c28b5 
> /__a/externals/node/bin/node /__w/_temp/containerHandlerInvoker.js'.
> {code}
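For background (a general POSIX convention rather than anything Azure-specific): exit code 137 means the process was terminated by signal 9 (SIGKILL), since shells report signal deaths as 128 + signal number. On CI agents this usually points at the kernel OOM killer or a container memory limit. A minimal decoding sketch:

```java
// POSIX convention: a process killed by a signal is reported by the shell
// as 128 + signal number, so 137 = 128 + 9 = SIGKILL.
public class ExitStatus {
    static int signalFromExitCode(int exitCode) {
        // exit codes above 128 conventionally encode a fatal signal
        return exitCode > 128 ? exitCode - 128 : 0;
    }

    public static void main(String[] args) {
        // 137 - 128 = 9, i.e. SIGKILL -- typically the OOM killer on CI hosts
        System.out.println("exit 137 -> signal " + signalFromExitCode(137));
    }
}
```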



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416464#comment-17416464
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24232&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=6bb545dd-772d-5d8c-f258-f5085fba3295&l=23445

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22829





[GitHub] [flink] gaoyunhaii commented on a change in pull request #17303: [FLINK-24300] SourceOperator#getAvailableFuture reuses future

2021-09-16 Thread GitBox


gaoyunhaii commented on a change in pull request #17303:
URL: https://github.com/apache/flink/pull/17303#discussion_r710767054



##
File path: 
flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SourceNAryInputChainingITCase.java
##
@@ -142,6 +153,26 @@ public void testMixedInputsWithMultipleUnionsExecution() throws Exception {
         verifySequence(result, 1L, 60L);
     }
 
+    // This tests FLINK-24300. The timeout is put here on purpose. Without the fix,
+    // the test takes very long but still finishes, so it would not hit the global timeout.
+    @Test(timeout = 3_000)

Review comment:
   Then perhaps in this PR we could add just a simple UT to check that multiple 
calls to `SourceOperator#getAvailableFuture()` return the same object? 
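For illustration only (hypothetical class name, not Flink's actual implementation), the idea behind the fix — repeated calls while unavailable return the same pending future instead of allocating a new one per call — can be sketched as:

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the future-reuse idea behind FLINK-24300: while
// unavailable, every call returns the SAME pending future rather than a
// fresh one, so callers polling in a loop do not pile up futures.
public class AvailabilitySketch {
    private CompletableFuture<Void> available = new CompletableFuture<>();

    // Reuse the current future; do not create a new one per call.
    public CompletableFuture<Void> getAvailableFuture() {
        return available;
    }

    // Completing the shared future wakes up every waiter at once.
    public void notifyAvailable() {
        available.complete(null);
    }

    // Only swap in a fresh future once the previous one has completed.
    public void resetUnavailable() {
        if (available.isDone()) {
            available = new CompletableFuture<>();
        }
    }
}
```

A unit test along the suggested lines would then simply assert `sketch.getAvailableFuture() == sketch.getAvailableFuture()`.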




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17308: [FLINK-24291][table-planner]Decimal precision is lost when deserializing in test cases

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17308:
URL: https://github.com/apache/flink/pull/17308#issuecomment-921483643


   
   ## CI report:
   
   * 2fcfaeb1a7b0fbaf8b5f65d4c4f9a7906af4e66b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24244)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17308: [FLINK-24291][table-planner]Decimal precision is lost when deserializing in test cases

2021-09-16 Thread GitBox


flinkbot commented on pull request #17308:
URL: https://github.com/apache/flink/pull/17308#issuecomment-921483643


   
   ## CI report:
   
   * 2fcfaeb1a7b0fbaf8b5f65d4c4f9a7906af4e66b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-22643) Too many TCP connections among TaskManagers for large scale jobs

2021-09-16 Thread Zhilong Hong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilong Hong updated FLINK-22643:
-
Affects Version/s: 1.14.0

> Too many TCP connections among TaskManagers for large scale jobs
> 
>
> Key: FLINK-22643
> URL: https://issues.apache.org/jira/browse/FLINK-22643
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Zhilong Hong
>Priority: Minor
>  Labels: auto-deprioritized-major
> Fix For: 1.14.0
>
>
> For large-scale jobs, there will be too many TCP connections among 
> TaskManagers. Let's take an example.
> Consider a streaming job with 20 JobVertices, each with a parallelism of 500. 
> We divide the vertices into 5 slot sharing groups, and each TaskManager has 5 
> slots, so there will be 400 TaskManagers in this job. Let's assume the job 
> runs on a cluster with 20 machines.
> If all the job edges are all-to-all edges, there will be 19 * 20 * 399 * 2 = 
> 303,240 TCP connections on each machine. If we run several jobs on this 
> cluster, the TCP connections may exceed the maximum limit of Linux, which is 
> 1,048,576. This will stop the TaskManagers from creating new TCP connections 
> and cause task failovers.
> As we run our production jobs on a K8s cluster, the jobs often fail over due 
> to network-related exceptions, such as {{Sending the partition request to 
> 'null' failed}}.
> We think we can decrease the number of connections by letting tasks reuse the 
> same connection. We implemented a POC that makes all tasks on the same 
> TaskManager reuse one TCP connection. For the example job mentioned above, 
> the number of connections decreases from 303,240 to 15,960. With the POC, the 
> frequency of network-related exceptions in our production jobs drops 
> significantly.
> The POC is illustrated in: 
> https://github.com/wsry/flink/commit/bf1c09e80450f40d018a1d1d4fe3dfd2de777fdc
>  
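The arithmetic in the example can be checked mechanically. This is a sketch; the interpretation of the factors follows the description above (19 all-to-all edges between 20 JobVertices, 400 TaskManagers / 20 machines = 20 TMs per machine, 399 remote peers per TM, both connection directions), which is our reading, not a quote of the POC:

```java
public class ConnectionCount {
    // Per-machine connections without reuse: every all-to-all edge opens
    // its own connections between each local TM and each remote peer.
    static long perMachine(int allToAllEdges, int tmsPerMachine, int remotePeers) {
        return (long) allToAllEdges * tmsPerMachine * remotePeers * 2;
    }

    // With connection reuse: one shared connection per TM pair,
    // independent of the number of job edges.
    static long perMachineWithReuse(int tmsPerMachine, int remotePeers) {
        return (long) tmsPerMachine * remotePeers * 2;
    }

    public static void main(String[] args) {
        System.out.println(perMachine(19, 20, 399));      // 303240, as in the issue
        System.out.println(perMachineWithReuse(20, 399)); // 15960, as in the issue
    }
}
```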





[jira] [Updated] (FLINK-22643) Too many TCP connections among TaskManagers for large scale jobs

2021-09-16 Thread Zhilong Hong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilong Hong updated FLINK-22643:
-
Affects Version/s: (was: 1.13.0)
   1.13.2

> Too many TCP connections among TaskManagers for large scale jobs
> 
>
> Key: FLINK-22643
> URL: https://issues.apache.org/jira/browse/FLINK-22643
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.14.0, 1.13.2
>Reporter: Zhilong Hong
>Priority: Minor
>  Labels: auto-deprioritized-major
>





[jira] [Updated] (FLINK-22643) Too many TCP connections among TaskManagers for large scale jobs

2021-09-16 Thread Zhilong Hong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhilong Hong updated FLINK-22643:
-
Fix Version/s: (was: 1.14.0)

> Too many TCP connections among TaskManagers for large scale jobs
> 
>
> Key: FLINK-22643
> URL: https://issues.apache.org/jira/browse/FLINK-22643
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Network
>Affects Versions: 1.13.0, 1.14.0
>Reporter: Zhilong Hong
>Priority: Minor
>  Labels: auto-deprioritized-major
>





[jira] [Commented] (FLINK-23493) python tests hang on Azure

2021-09-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416444#comment-17416444
 ] 

Xintong Song commented on FLINK-23493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24231&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22447

> python tests hang on Azure
> --
>
> Key: FLINK-23493
> URL: https://issues.apache.org/jira/browse/FLINK-23493
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.14.0, 1.13.1, 1.12.4
>Reporter: Dawid Wysakowicz
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=20898&view=logs&j=821b528f-1eed-5598-a3b4-7f748b13f261&t=4fad9527-b9a5-5015-1b70-8356e5c91490&l=22829





[jira] [Commented] (FLINK-11484) Blink java.util.concurrent.TimeoutException

2021-09-16 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416439#comment-17416439
 ] 

godfrey he commented on FLINK-11484:


The Blink planner was ported to the Flink community in v1.9, while this issue 
was reported against v1.5.5. I will close this issue; feel free to open a new 
one if a similar error is found. [~pijing] WDYT?

> Blink java.util.concurrent.TimeoutException
> ---
>
> Key: FLINK-11484
> URL: https://issues.apache.org/jira/browse/FLINK-11484
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.5.5
> Environment: The link of blink source code: 
> [github.com/apache/flink/tree/blink|https://github.com/apache/flink/tree/blink]
>Reporter: pj
>Priority: Minor
>  Labels: auto-deprioritized-major, blink
> Attachments: 1.png, code.png, dashboard.png, error.png, 
> image-2019-02-13-10-54-16-880.png
>
>
> *If I run a Blink application on YARN with a parallelism larger than 1.*
> *Following is the command:*
> ./flink run -m yarn-cluster -ynm FLINK_NG_ENGINE_1 -ys 4 -yn 10 -ytm 5120 -p 
> 40 -c XXMain ~/xx.jar
> *Following is the code:*
> {{DataStream outputStream = tableEnv.toAppendStream(curTable, Row.class); 
> outputStream.print();}}
> *{{All subtasks of the application hang for a long time, and finally the 
> }}{{toAppendStream()}}{{ call throws an exception like the one below:}}*
> {{org.apache.flink.client.program.ProgramInvocationException: Job failed. 
> (JobID: f5e4f7243d06035202e8fa250c364304) at 
> org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:276)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:482) 
> at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:85)
>  at 
> org.apache.flink.streaming.api.environment.StreamContextEnvironment.executeInternal(StreamContextEnvironment.java:37)
>  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1893)
>  at com.ngengine.main.KafkaMergeMain.startApp(KafkaMergeMain.java:352) at 
> com.ngengine.main.KafkaMergeMain.main(KafkaMergeMain.java:94) at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:561)
>  at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:445)
>  at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:419) 
> at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:786) 
> at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:280) 
> at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215) at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1029)
>  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1105) 
> at java.security.AccessController.doPrivileged(Native Method) at 
> javax.security.auth.Subject.doAs(Subject.java:422) at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1692)
>  at 
> org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
>  at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1105) 
> Caused by: java.util.concurrent.TimeoutException at 
> org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:834)
>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>  at java.lang.Thread.run(Thread.java:745)}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17205: [FLINK-24168][table-planner] Update MATCH_ROWTIME function which could receive 0 argument or 1 argument

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17205:
URL: https://github.com/apache/flink/pull/17205#issuecomment-915449441


   
   ## CI report:
   
   * c8ea9552f00c97efc8cbb3e19fa23b89b8c69772 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24181)
 
   * 5e5c83a827211b9a5d8337d58bcf352c7fc37a4f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24240)
 
   * cdf36a2de30c3f50128af2ec881cd65b66b00bfa Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24243)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17308: [FLINK-24291][table-planner]Decimal precision is lost when deserializing in test cases

2021-09-16 Thread GitBox


flinkbot commented on pull request #17308:
URL: https://github.com/apache/flink/pull/17308#issuecomment-921452710


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2fcfaeb1a7b0fbaf8b5f65d4c4f9a7906af4e66b (Fri Sep 17 
04:26:19 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Comment Edited] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416430#comment-17416430
 ] 

shiwuliang edited comment on FLINK-24314 at 9/17/21, 4:18 AM:
--

[~zlzhang0122]

By the way, do you mean that I should configure it like this?

!image-2021-09-17-12-16-46-277.png!

 

This configuration seems a bit confusing: I have to configure the checkpoint 
directory in RocksDBStateBackend and then set the same setting in 
CheckpointConfig again.


was (Author: shiwuliang):
[~zlzhang0122]

by the way, do u mean that I should config like this? 

!image-2021-09-17-12-16-46-277.png!

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png, image-2021-09-17-12-04-55-009.png, 
> image-2021-09-17-12-16-46-277.png
>
>
> When I configure `RocksDBStateBackend` like this,
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> an exception like this occurs:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store checkpoints. 
> So given that I am effectively using a file-system state backend, why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this, and if it is indeed a 1.13.0 bug, how I 
> can work around it without upgrading.
>  
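For reference, Flink 1.13 split the old combined state backend into two separately configured pieces: the state backend (where working state lives, e.g. RocksDB on local disks) and the checkpoint storage (where durable snapshots go). A configuration sketch along those lines — the HDFS path and checkpoint interval here are hypothetical placeholders, not values from the issue:

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDBConfigSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // State backend: where the operator's working state is kept (RocksDB).
        env.setStateBackend(new EmbeddedRocksDBStateBackend());

        // Checkpoint storage: where durable checkpoints are written
        // (hypothetical HDFS path, hypothetical 60s interval).
        env.enableCheckpointing(60_000);
        env.getCheckpointConfig()
                .setCheckpointStorage("hdfs://namenode:8020/flink/checkpoints");
    }
}
```

This separation is why, in 1.13, the checkpoint directory is set on `CheckpointConfig` rather than (only) on the state backend constructor.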





[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Attachment: image-2021-09-17-12-16-46-277.png

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png, image-2021-09-17-12-04-55-009.png, 
> image-2021-09-17-12-16-46-277.png
>
>





[jira] [Commented] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17416430#comment-17416430
 ] 

shiwuliang commented on FLINK-24314:


[~zlzhang0122]

By the way, do you mean that I should configure it like this?

!image-2021-09-17-12-16-46-277.png!

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png, image-2021-09-17-12-04-55-009.png, 
> image-2021-09-17-12-16-46-277.png
>
>





[jira] [Updated] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-09-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-24291:
---
Fix Version/s: 1.14.1
   1.13.3

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Assignee: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.13.3, 1.15.0, 1.14.1
>
>
> When I added the following test case to FileSystemItCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next 
> resultjava.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> 
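As a generic Java illustration (plain `BigDecimal`, not Flink's planner serializers), precision and scale are part of a decimal value's identity, which is why a value deserialized with the wrong precision/scale can fail an equality check even when it is numerically equal to the expected DECIMAL(10,0) value:

```java
import java.math.BigDecimal;

public class DecimalScaleDemo {
    public static void main(String[] args) {
        // Test data from the issue, as declared: DECIMAL(10,0)
        BigDecimal declared = new BigDecimal("2113554011");
        // The same numeric value carrying a different scale, as a lossy
        // serde round-trip might produce (illustrative, not the actual bug path)
        BigDecimal roundTripped = new BigDecimal("2113554011.00");

        System.out.println(declared.precision()); // 10 digits
        System.out.println(declared.scale());     // scale 0
        // equals() compares value AND scale; compareTo() only the value
        System.out.println(declared.equals(roundTripped));    // false
        System.out.println(declared.compareTo(roundTripped)); // 0
    }
}
```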

[jira] [Updated] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-09-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he updated FLINK-24291:
---
Fix Version/s: 1.15.0

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Assignee: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> When adding the following test case into FileSystemITCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> 

[jira] [Assigned] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-09-16 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-24291:
--

Assignee: xuyangzhong

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Assignee: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
>
> When adding the following test case into FileSystemITCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)

[jira] [Updated] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24291:
---
Labels: pull-request-available  (was: )

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Priority: Minor
>  Labels: pull-request-available
>
> When adding the following test case into FileSystemITCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)
>  at 
> 

[GitHub] [flink] xuyangzhong opened a new pull request #17308: [FLINK-24291][table-planner]Decimal precision is lost when deserializing in test cases

2021-09-16 Thread GitBox


xuyangzhong opened a new pull request #17308:
URL: https://github.com/apache/flink/pull/17308


   ## What is the purpose of the change
   
   Get the correct converter for deserializing a decimal field by giving it 
the correct precision information.
   
   ## Brief change log
   
   It is caused by different code paths between executing actual SQL queries 
and running test cases, so the change is mainly in 
_TestRowDataCsvInputFormat_.
   
 - Replace the TypeInformation, which does not carry precision info, with 
DataType, which does.
 - Add some test cases in _FileSystemITCaseBase_
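
   The gap between the two type representations can be illustrated with plain 
`java.math.BigDecimal` (a standalone sketch; `readDecimal` and the fallback 
to scale 18 are hypothetical stand-ins for converter behaviour, not Flink 
APIs):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class DecimalRoundTrip {
    // Hypothetical converter that knows the declared DECIMAL(precision, scale)
    static BigDecimal readDecimal(String raw, int precision, int scale) {
        BigDecimal v = new BigDecimal(raw).setScale(scale, RoundingMode.HALF_UP);
        // Reject values whose integer part overflows the declared precision
        if (v.precision() - v.scale() > precision - scale) {
            throw new IllegalArgumentException(
                    "value overflows DECIMAL(" + precision + "," + scale + ")");
        }
        return v;
    }

    public static void main(String[] args) {
        // With the declared type DECIMAL(10,0) the CSV value round-trips exactly
        BigDecimal withType = readDecimal("2113554011", 10, 0);
        // A converter that ignores the declared type might fall back to some
        // default scale (18 here), yielding a value with a different scale
        BigDecimal withoutType = new BigDecimal("2113554011").setScale(18);
        System.out.println(withType);                    // 2113554011
        System.out.println(withoutType);                 // 2113554011.000000000000000000
        System.out.println(withType.equals(withoutType)); // false: BigDecimal.equals compares scale
    }
}
```

   The point of the fix is the same: only a type that carries precision and 
scale lets the deserializer reproduce the declared DECIMAL value exactly.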
   
   ## Verifying this change
   
   The test case added can verify this change.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? 
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24291) Decimal precision is lost when deserializing in test cases

2021-09-16 Thread xuyangzhong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xuyangzhong updated FLINK-24291:

Summary: Decimal precision is lost when deserializing in test cases  (was: 
Decimal precision is lost when deserializing)

> Decimal precision is lost when deserializing in test cases
> --
>
> Key: FLINK-24291
> URL: https://issues.apache.org/jira/browse/FLINK-24291
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: xuyangzhong
>Priority: Minor
>
> When adding the following test case into FileSystemITCaseBase:
> {code:java}
> // create table
> tableEnv.executeSql(
>   s"""
>  |create table test2 (
>  |  c0 decimal(10,0), c1 int
>  |) with (
>  |  'connector' = 'filesystem',
>  |  'path' = '/Users/zhongxuyang/test/test',
>  |  'format' = 'testcsv'
>  |)
>""".stripMargin
> )
> //test file content is:
> //2113554011,1
> //2113554022,2
> {code}
> and
> {code:java}
> // select sql
> @Test
> def myTest2(): Unit={
>   check(
> "SELECT c0 FROM test2",
> Seq(
>   row(2113554011),
>   row(2113554022)
> ))
> }
> {code}
> I got an exception:
> {code}
> java.lang.RuntimeException: Failed to fetch next result
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:109)
>  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:80)
>  at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:370)
>  at java.util.Iterator.forEachRemaining(Iterator.java:115) at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:109) 
> at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:140)
>  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:106)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.check(BatchFileSystemITCaseBase.scala:46)
>  at 
> org.apache.flink.table.planner.runtime.FileSystemITCaseBase$class.myTest2(FileSystemITCaseBase.scala:128)
>  at 
> org.apache.flink.table.planner.runtime.batch.sql.BatchFileSystemITCaseBase.myTest2(BatchFileSystemITCaseBase.scala:33)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) 
> at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45) 
> at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
>  at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366) at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
>  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
>  at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331) at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79) at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329) at 
> org.junit.runners.ParentRunner.access$100(ParentRunner.java:66) at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54) at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20) at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306) at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413) at 
> org.junit.runner.JUnitCore.run(JUnitCore.java:137) at 
> 

[jira] [Commented] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416427#comment-17416427
 ] 

shiwuliang commented on FLINK-24314:


[~zlzhang0122] hi, thanks for your reply.

 

But in RocksDBStateBackend's constructor, it creates a FsStateBackend to 
store checkpoint data:

!image-2021-09-17-12-04-55-009.png!

 

I use `new RocksDBStateBackend("hdfs://")` to create a RocksDBStateBackend; 
it seems the 'hdfs://' argument is the checkpoint storage location...
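
For reference, in the Flink 1.13 API the state backend and the checkpoint 
storage are configured through two separate calls; a minimal sketch (the HDFS 
path is a placeholder):

```java
import org.apache.flink.contrib.streaming.state.EmbeddedRocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbCheckpointSetup {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        // Working state lives in RocksDB on the TaskManager's local disk.
        env.setStateBackend(new EmbeddedRocksDBStateBackend());
        // Checkpoints are written to a durable file system, configured separately.
        env.getCheckpointConfig().setCheckpointStorage("hdfs:///flink/checkpoints");
    }
}
```

With the legacy `new RocksDBStateBackend("hdfs://…")` constructor both roles 
were set through one argument, which is why the path passed there acts as the 
checkpoint storage location.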

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png, image-2021-09-17-12-04-55-009.png
>
>
> When I configure `RocksDBStateBackend`:
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> I get an exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store 
> checkpoints. So that means I have effectively used FsStateBackend; why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this and, if it is indeed a 1.13.0 bug, how 
> I can work around it besides upgrading?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Attachment: image-2021-09-17-12-04-55-009.png

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png, image-2021-09-17-12-04-55-009.png
>
>
> When I configure `RocksDBStateBackend`:
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> I get an exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store 
> checkpoints. So that means I have effectively used FsStateBackend; why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this and, if it is indeed a 1.13.0 bug, how 
> I can work around it besides upgrading?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Attachment: image-2021-09-17-12-03-58-285.png

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-12-03-58-285.png
>
>
> When I configure `RocksDBStateBackend`:
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> I get an exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store 
> checkpoints. So that means I have effectively used FsStateBackend; why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this and, if it is indeed a 1.13.0 bug, how 
> I can work around it besides upgrading?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] RocMarshal commented on pull request #17279: [Hotfix][streaming] Fix a typo.

2021-09-16 Thread GitBox


RocMarshal commented on pull request #17279:
URL: https://github.com/apache/flink/pull/17279#issuecomment-921439012


   Hi, @alpinegizmo. Could you help me check it? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xuyangzhong closed pull request #17089: [FLINK-22601][table-planner] PushWatermarkIntoScan should produce digest created by Expression.

2021-09-16 Thread GitBox


xuyangzhong closed pull request #17089:
URL: https://github.com/apache/flink/pull/17089


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17205: [FLINK-24168][table-planner] Update MATCH_ROWTIME function which could receive 0 argument or 1 argument

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17205:
URL: https://github.com/apache/flink/pull/17205#issuecomment-915449441


   
   ## CI report:
   
   * c8ea9552f00c97efc8cbb3e19fa23b89b8c69772 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24181)
 
   * 5e5c83a827211b9a5d8337d58bcf352c7fc37a4f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24240)
 
   * cdf36a2de30c3f50128af2ec881cd65b66b00bfa UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416426#comment-17416426
 ] 

zlzhang0122 edited comment on FLINK-24314 at 9/17/21, 3:54 AM:
---

In Flink 1.13, the storage of state and checkpoints was separated, so you 
should set the state backend and the checkpoint location at the same time. In 
your case, you only set the state backend, so it uses memory to store the 
checkpoint by default, and since the size is beyond the limit, it throws an 
exception.


was (Author: zlzhang0122):
In Flink 1.13, the storage of state and checkpoints was separated, so you 
should set the state backend and the checkpoint location at the same time. In 
your case, you only set the state backend, so it uses memory to store the 
checkpoint by default, and since the size is beyond the limit, it reports an 
exception.

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I configure `RocksDBStateBackend`:
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> I get an exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store 
> checkpoints. So that means I have effectively used FsStateBackend; why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this and, if it is indeed a 1.13.0 bug, how 
> I can work around it besides upgrading?
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread zlzhang0122 (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416426#comment-17416426
 ] 

zlzhang0122 commented on FLINK-24314:
-

In Flink 1.13, the storage of state and checkpoints was separated, so you 
should set the state backend and the checkpoint location at the same time. In 
your case, you only set the state backend, so it uses memory to store the 
checkpoint by default, and since the size is beyond the limit, it reports an 
exception.

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I configure `RocksDBStateBackend`:
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> I get an exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> It seems that RocksDBStateBackend uses FsStateBackend to store 
> checkpoints. So that means I have effectively used FsStateBackend; why does 
> this exception occur?
>  
> I use Flink 1.13.0 and found a similar question at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure whether that question is the same as mine.
> I want to know how I can solve this and, if it is indeed a 1.13.0 bug, how 
> I can work around it besides upgrading?
>  





[GitHub] [flink] RocMarshal edited a comment on pull request #16962: [FLINK-15352][connector-jdbc] Develop MySQLCatalog to connect Flink with MySQL tables and ecosystem.

2021-09-16 Thread GitBox


RocMarshal edited a comment on pull request #16962:
URL: https://github.com/apache/flink/pull/16962#issuecomment-920520153


   Hi, @wuchong @twalthr @zhuzhurk @leonardBang . I made some changes in the 
ITCase part of the PR.  Could you help me to review this PR if you have free 
time? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17306: [BP-1.13][FLINK-24277][connector/kafka] Add configuration for committing offset on checkpoint and disable it if group ID is not speci

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17306:
URL: https://github.com/apache/flink/pull/17306#issuecomment-921415622


   
   ## CI report:
   
   * 67cc66fea096aae34129fb36bfff203e2b96b3eb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24241)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17307: [BP-1.12][FLINK-24277][connector/kafka] Add configuration for committing offset on checkpoint and disable it if group ID is not speci

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17307:
URL: https://github.com/apache/flink/pull/17307#issuecomment-921415679


   
   ## CI report:
   
   * 2d451af80e6af332a6f0d336e47244e59a790d75 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24242)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17205: [FLINK-24168][table-planner] Update MATCH_ROWTIME function which could receive 0 argument or 1 argument

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17205:
URL: https://github.com/apache/flink/pull/17205#issuecomment-915449441


   
   ## CI report:
   
   * c8ea9552f00c97efc8cbb3e19fa23b89b8c69772 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24181)
 
   * 5e5c83a827211b9a5d8337d58bcf352c7fc37a4f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24240)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-15493) FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator failed on travis

2021-09-16 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416421#comment-17416421
 ] 

Xintong Song commented on FLINK-15493:
--

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24230=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=6247

> FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator
>  failed on travis
> ---
>
> Key: FLINK-15493
> URL: https://issues.apache.org/jira/browse/FLINK-15493
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.10.0, 1.13.0, 1.14.0, 1.15.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: auto-deprioritized-critical, auto-unassigned, 
> test-stability
> Fix For: 1.14.0, 1.13.3, 1.15.0
>
>
> FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator
>  failed on travis with the following exception:
> {code}
> Test 
> testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>  failed with: org.junit.runners.model.TestTimedOutException: test timed out 
> after 3 milliseconds at java.lang.Object.wait(Native Method) at 
> java.lang.Object.wait(Object.java:502) at 
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:92)
>  at 
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl.createTestTopic(KafkaTestEnvironmentImpl.java:177)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironment.createTestTopic(KafkaTestEnvironment.java:115)
>  at 
> org.apache.flink.streaming.connectors.kafka.KafkaTestBase.createTestTopic(KafkaTestBase.java:197)
>  at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testProducerWhenCommitEmptyPartitionsToOutdatedTxnCoordinator(FlinkKafkaInternalProducerITCase.java:176)
>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  at java.lang.reflect.Method.invoke(Method.java:498) at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>  at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
> java.lang.Thread.run(Thread.java:748)
> {code}
> instance: [https://api.travis-ci.org/v3/job/633307060/log.txt]





[GitHub] [flink] flinkbot edited a comment on pull request #16792: [FLINK-23739][table]BlackHoleSink & PrintSink implement SupportsParti…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #16792:
URL: https://github.com/apache/flink/pull/16792#issuecomment-897629413


   
   ## CI report:
   
   * 7c7c6b78520a4daa62d0844c063fd68e9bcc9387 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22369)
 
   * 1ef8b6db2cdc1a43002088892bfea6eb2fcf238a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24239)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Description: 
When I config to use `RocksDBStatebackend`, 

 
{code:java}
// code placeholder
RocksDBStateBackend rocksDBStateBackend = new 
RocksDBStateBackend("hdfs://");
streamEnv.setStateBackend(rocksDBStateBackend);{code}
 

there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

Seems like the RocksdbStatebackend will use FsStateBackend to store 
checkpoints. So it means that I have used FileSystemStateBackend, why does this 
exception occur?

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 

  was:
When I config to use `RocksDBStatebackend`, 

 

```

RocksDBStateBackend rocksDBStateBackend = new 
RocksDBStateBackend("hdfs://");
streamEnv.setStateBackend(rocksDBStateBackend);

```

 

there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

Seems like the RocksdbStatebackend will use FsStateBackend to store 
checkpoints. So it means that I have used FileSystemStateBackend, why does this 
exception occur?

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 


> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I config to use `RocksDBStatebackend`, 
>  
> {code:java}
> // code placeholder
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);{code}
>  
> there are some exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> Seems like the RocksdbStatebackend will use FsStateBackend to store 
> checkpoints. So it means that I have used FileSystemStateBackend, why does 
> this exception occur?
>  
> I use flink 1.13.0 and found a similar question like this at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure if his question is same with mine.
> I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can 
> I bypass it besides upgrading?
>  





[jira] [Commented] (FLINK-23271) RuntimeException: while resolving method 'booleanValue' in class class java.math.BigDecimal

2021-09-16 Thread xuyangzhong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-23271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416419#comment-17416419
 ] 

xuyangzhong commented on FLINK-23271:
-

link: https://issues.apache.org/jira/browse/CALCITE-4777

> RuntimeException: while resolving method 'booleanValue' in class class 
> java.math.BigDecimal
> ---
>
> Key: FLINK-23271
> URL: https://issues.apache.org/jira/browse/FLINK-23271
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.14.0
>Reporter: xiaojin.wy
>Priority: Major
>
> *Sql :*
> CREATE TABLE database3_t0(
> c0 DECIMAL , c1 SMALLINT
> ) WITH (
>  'connector' = 'filesystem',
>  'path' = 'hdfs:///tmp/database3_t0.csv',
>  'format' = 'csv' 
> );
> INSERT OVERWRITE database8_t0(c0, c1) VALUES(2113554022, cast(-22975 as 
> SMALLINT)), (1570419395, cast(-26858 as SMALLINT)), (-1569861129, cast(-20143 
> as SMALLINT));
> SELECT database8_t0.c0 AS ref0 FROM database8_t0 WHERE CAST 
> (0.10915913549909961 AS BOOLEAN;
> *After excuting the sql, you will find the error:*
> java.lang.RuntimeException: while resolving method 'booleanValue' in class 
> class java.math.BigDecimal
>   at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:424)
>   at org.apache.calcite.linq4j.tree.Expressions.call(Expressions.java:435)
>   at 
> org.apache.calcite.linq4j.tree.Expressions.unbox(Expressions.java:1453)
>   at 
> org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:398)
>   at 
> org.apache.calcite.adapter.enumerable.EnumUtils.convert(EnumUtils.java:326)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateCast(RexToLixTranslator.java:538)
>   at 
> org.apache.calcite.adapter.enumerable.RexImpTable$CastImplementor.implementSafe(RexImpTable.java:2450)
>   at 
> org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.genValueStatement(RexImpTable.java:2894)
>   at 
> org.apache.calcite.adapter.enumerable.RexImpTable$AbstractRexCallImplementor.implement(RexImpTable.java:2859)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:1084)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitCall(RexToLixTranslator.java:90)
>   at org.apache.calcite.rex.RexCall.accept(RexCall.java:174)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:970)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.visitLocalRef(RexToLixTranslator.java:90)
>   at org.apache.calcite.rex.RexLocalRef.accept(RexLocalRef.java:75)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:237)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.translate(RexToLixTranslator.java:231)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateList(RexToLixTranslator.java:818)
>   at 
> org.apache.calcite.adapter.enumerable.RexToLixTranslator.translateProjects(RexToLixTranslator.java:198)
>   at 
> org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:90)
>   at 
> org.apache.calcite.rex.RexExecutorImpl.compile(RexExecutorImpl.java:66)
>   at 
> org.apache.calcite.rex.RexExecutorImpl.reduce(RexExecutorImpl.java:128)
>   at 
> org.apache.calcite.rex.RexSimplify.simplifyCast(RexSimplify.java:2101)
>   at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:326)
>   at 
> org.apache.calcite.rex.RexSimplify.simplifyUnknownAs(RexSimplify.java:287)
>   at org.apache.calcite.rex.RexSimplify.simplify(RexSimplify.java:262)
>   at 
> org.apache.flink.table.planner.plan.utils.FlinkRexUtil$.simplify(FlinkRexUtil.scala:224)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.simplify(SimplifyFilterConditionRule.scala:63)
>   at 
> org.apache.flink.table.planner.plan.rules.logical.SimplifyFilterConditionRule.onMatch(SimplifyFilterConditionRule.scala:46)
>   at 
> org.apache.calcite.plan.AbstractRelOptPlanner.fireRule(AbstractRelOptPlanner.java:333)
>   at org.apache.calcite.plan.hep.HepPlanner.applyRule(HepPlanner.java:542)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.applyRules(HepPlanner.java:407)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeInstruction(HepPlanner.java:243)
>   at 
> org.apache.calcite.plan.hep.HepInstruction$RuleInstance.execute(HepInstruction.java:127)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.executeProgram(HepPlanner.java:202)
>   at 
> org.apache.calcite.plan.hep.HepPlanner.findBestExp(HepPlanner.java:189)
>   at 
> 

[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Description: 
When I config to use `RocksDBStatebackend`, 

 

```

RocksDBStateBackend rocksDBStateBackend = new 
RocksDBStateBackend("hdfs://");
streamEnv.setStateBackend(rocksDBStateBackend);

```

 

there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

Seems like the RocksdbStatebackend will use FsStateBackend to store 
checkpoints. So it means that I have used FileSystemStateBackend, why does this 
exception occur?

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 

  was:
When I config to use `RocksDBStatebackend`, 

 

there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

Seems like the RocksdbStatebackend will use FsStateBackend to store 
checkpoints. So it means that I have used FileSystemStateBackend, why does this 
exception occur?

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 


> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I config to use `RocksDBStatebackend`, 
>  
> ```
> RocksDBStateBackend rocksDBStateBackend = new 
> RocksDBStateBackend("hdfs://");
> streamEnv.setStateBackend(rocksDBStateBackend);
> ```
>  
> there are some exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> Seems like the RocksdbStatebackend will use FsStateBackend to store 
> checkpoints. So it means that I have used FileSystemStateBackend, why does 
> this exception occur?
>  
> I use flink 1.13.0 and found a similar question like this at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure if his question is same with mine.
> I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can 
> I bypass it besides upgrading?
>  





[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Description: 
When I config to use `RocksDBStatebackend`, 

 

there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

Seems like the RocksdbStatebackend will use FsStateBackend to store 
checkpoints. So it means that I have used FileSystemStateBackend, why does this 
exception occur?

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 

  was:
When I config to use `RocksDBStatebackend`,  there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 


> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I config to use `RocksDBStatebackend`, 
>  
> there are some exception like this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> Seems like the RocksdbStatebackend will use FsStateBackend to store 
> checkpoints. So it means that I have used FileSystemStateBackend, why does 
> this exception occur?
>  
> I use flink 1.13.0 and found a similar question like this at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure if his question is same with mine.
> I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can 
> I bypass it besides upgrading?
>  





[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Attachment: (was: image-2021-09-17-11-05-16-662.png)

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> When I config to use `RocksDBStatebackend`,  there are some exception like 
> this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> I use flink 1.13.0 and found a similar question like this at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure if his question is same with mine.
> I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can 
> I bypass it besides upgrading?
>  





[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
 Attachment: image-2021-09-17-11-05-16-662.png
Description: 
When I config to use `RocksDBStatebackend`,  there are some exception like this:

!image-2021-09-17-10-59-50-094.png|width=1446,height=420!

 

I use flink 1.13.0 and found a similar question like this at: 
[https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]

 

I'm not sure if his question is same with mine.

I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can I 
bypass it besides upgrading?

 

  was:!image-2021-09-17-10-59-50-094.png|width=1446,height=420!


> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png, 
> image-2021-09-17-11-05-16-662.png
>
>
> When I config to use `RocksDBStatebackend`,  there are some exception like 
> this:
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!
>  
> I use flink 1.13.0 and found a similar question like this at: 
> [https://stackoverflow.com/questions/68314652/flink-state-backend-config-with-the-state-processor-api]
>  
> I'm not sure if his question is same with mine.
> I want to know how can I solve this and if it is indeed a 1.13.0 bug, how can 
> I bypass it besides upgrading?
>  





[jira] [Updated] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

shiwuliang updated FLINK-24314:
---
Description: !image-2021-09-17-10-59-50-094.png|width=1446,height=420!  
(was: !image-2021-09-17-10-59-50-094.png!)

> Always use memory state backend with RocksDB
> 
>
> Key: FLINK-24314
> URL: https://issues.apache.org/jira/browse/FLINK-24314
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.13.0
>Reporter: shiwuliang
>Priority: Major
> Attachments: image-2021-09-17-10-59-50-094.png
>
>
> !image-2021-09-17-10-59-50-094.png|width=1446,height=420!





[jira] [Created] (FLINK-24314) Always use memory state backend with RocksDB

2021-09-16 Thread shiwuliang (Jira)
shiwuliang created FLINK-24314:
--

 Summary: Always use memory state backend with RocksDB
 Key: FLINK-24314
 URL: https://issues.apache.org/jira/browse/FLINK-24314
 Project: Flink
  Issue Type: Bug
Affects Versions: 1.13.0
Reporter: shiwuliang
 Attachments: image-2021-09-17-10-59-50-094.png

!image-2021-09-17-10-59-50-094.png!





[GitHub] [flink] flinkbot commented on pull request #17307: [BP-1.12][FLINK-24277][connector/kafka] Add configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


flinkbot commented on pull request #17307:
URL: https://github.com/apache/flink/pull/17307#issuecomment-921415679


   
   ## CI report:
   
   * 2d451af80e6af332a6f0d336e47244e59a790d75 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot commented on pull request #17306: [BP-1.13][FLINK-24277][connector/kafka] Add configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


flinkbot commented on pull request #17306:
URL: https://github.com/apache/flink/pull/17306#issuecomment-921415622


   
   ## CI report:
   
   * 67cc66fea096aae34129fb36bfff203e2b96b3eb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #17205: [FLINK-24168][table-planner] Update MATCH_ROWTIME function which could receive 0 argument or 1 argument

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17205:
URL: https://github.com/apache/flink/pull/17205#issuecomment-915449441


   
   ## CI report:
   
   * c8ea9552f00c97efc8cbb3e19fa23b89b8c69772 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24181)
 
   * 5e5c83a827211b9a5d8337d58bcf352c7fc37a4f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16853: [FLINK-23827][table-planner] Fix ModifiedMonotonicity inference for s…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #16853:
URL: https://github.com/apache/flink/pull/16853#issuecomment-900037847


   
   ## CI report:
   
   * 0cc6f3f9235ffe069ca7eb7feac3e641cbe3cb22 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24238)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24186)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] flinkbot edited a comment on pull request #16792: [FLINK-23739][table]BlackHoleSink & PrintSink implement SupportsParti…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #16792:
URL: https://github.com/apache/flink/pull/16792#issuecomment-897629413


   
   ## CI report:
   
   * 7c7c6b78520a4daa62d0844c063fd68e9bcc9387 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=22369)
 
   * 1ef8b6db2cdc1a43002088892bfea6eb2fcf238a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[GitHub] [flink] beyond1920 commented on pull request #17205: [FLINK-24168][table-planner] Update MATCH_ROWTIME function which could receive 0 argument or 1 argument

2021-09-16 Thread GitBox


beyond1920 commented on pull request #17205:
URL: https://github.com/apache/flink/pull/17205#issuecomment-921409748


   @godfreyhe Thanks very much for review. I've updated the pr, please have a 
look. Thanks.






[GitHub] [flink] flinkbot edited a comment on pull request #16853: [FLINK-23827][table-planner] Fix ModifiedMonotonicity inference for s…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #16853:
URL: https://github.com/apache/flink/pull/16853#issuecomment-900037847


   
   ## CI report:
   
   * 0cc6f3f9235ffe069ca7eb7feac3e641cbe3cb22 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24186)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24238)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on pull request #16853: [FLINK-23827][table-planner] Fix ModifiedMonotonicity inference for s…

2021-09-16 Thread GitBox


godfreyhe commented on pull request #16853:
URL: https://github.com/apache/flink/pull/16853#issuecomment-921406598


   @flinkbot run azure


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17306: [BP-1.13][FLINK-24277][connector/kafka] And configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


flinkbot commented on pull request #17306:
URL: https://github.com/apache/flink/pull/17306#issuecomment-921406362


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 67cc66fea096aae34129fb36bfff203e2b96b3eb (Fri Sep 17 
02:27:59 UTC 2021)
   
   **Warnings:**
* Documentation files were touched, but no `docs/content.zh/` files: Update 
Chinese documentation or file Jira ticket.
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24277).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17307: [BP-1.12][FLINK-24277][connector/kafka] And configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


flinkbot commented on pull request #17307:
URL: https://github.com/apache/flink/pull/17307#issuecomment-921406349


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 2d451af80e6af332a6f0d336e47244e59a790d75 (Fri Sep 17 
02:27:57 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-24277).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] PatrickRen opened a new pull request #17307: [BP-1.12][FLINK-24277][connector/kafka] And configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


PatrickRen opened a new pull request #17307:
URL: https://github.com/apache/flink/pull/17307


   Unchanged back-port of #17276 on release-1.12


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] PatrickRen opened a new pull request #17306: [BP-1.13][FLINK-24277][connector/kafka] And configuration for committing offset on checkpoint and disable it if group ID is not specified

2021-09-16 Thread GitBox


PatrickRen opened a new pull request #17306:
URL: https://github.com/apache/flink/pull/17306


   Unchanged back-port of #17276 on release-1.13


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Issue Comment Deleted] (FLINK-11250) fix thread leaked when StreamTask switched from DEPLOYING to CANCELING

2021-09-16 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei updated FLINK-11250:
-
Comment: was deleted

(was: merged commit 
[{{3b6b522}}|https://github.com/apache/flink/commit/3b6b5229df09c05fbeeaf88bf91df6d18c71097a]
 into apache:master)

> fix thread leaked when StreamTask switched from DEPLOYING to CANCELING
> --
>
> Key: FLINK-11250
> URL: https://issues.apache.org/jira/browse/FLINK-11250
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.5.6, 1.6.3, 1.7.1
>Reporter: lamber-ken
>Assignee: Anton Kalashnikov
>Priority: Minor
>  Labels: auto-deprioritized-major, auto-unassigned, 
> pull-request-available
> Fix For: 1.7.3
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Beginning with the flink-1.5.x versions, streamRecordWriters are created in 
> StreamTask's constructor, which starts the OutputFlusher daemon thread. So when 
> a task switches from DEPLOYING to CANCELING state, the daemon thread is leaked.
>  
> *reproducible example*
> {code:java}
> public static void main(String[] args) throws Exception {
>     StreamExecutionEnvironment env =
>         StreamExecutionEnvironment.getExecutionEnvironment();
>     env.enableCheckpointing(5000);
>     env
>         .addSource(new SourceFunction<String>() {
>             @Override
>             public void run(SourceContext<String> ctx) throws Exception {
>                 for (int i = 0; i < 1; i++) {
>                     Thread.sleep(100);
>                     ctx.collect("data " + i);
>                 }
>             }
>             @Override
>             public void cancel() {
>             }
>         })
>         .addSink(new RichSinkFunction<String>() {
>             @Override
>             public void open(Configuration parameters) throws Exception {
>                 System.out.println(1 / 0);
>             }
>             @Override
>             public void invoke(String value, Context context) throws Exception {
>             }
>         }).setParallelism(2);
>     env.execute();
> }{code}
> *some useful log*
> {code:java}
> 2019-01-02 03:03:47.525 [thread==> jobmanager-future-thread-2] 
> executiongraph.Execution#transitionState:1316 Source: Custom Source (1/1) 
> (74a4ed4bb2f80aa2b98e11bd09ea64ef) switched from CREATED to SCHEDULED.
> 2019-01-02 03:03:47.526 [thread==> flink-akka.actor.default-dispatcher-5] 
> slotpool.SlotPool#allocateSlot:326 Received slot request 
> [SlotRequestId{12bfcf1674f5b96567a076086dbbfd1b}] for task: Attempt #1 
> (Source: Custom Source (1/1)) @ (unassigned) - [SCHEDULED]
> 2019-01-02 03:03:47.527 [thread==> flink-akka.actor.default-dispatcher-5] 
> slotpool.SlotSharingManager#createRootSlot:151 Create multi task slot 
> [SlotRequestId{494e47eb8318e2c0a1db91dda6b8}] in slot 
> [SlotRequestId{6d7f0173c1d48e5559f6a14b080ee817}].
> 2019-01-02 03:03:47.527 [thread==> flink-akka.actor.default-dispatcher-5] 
> slotpool.SlotSharingManager$MultiTaskSlot#allocateSingleTaskSlot:426 Create 
> single task slot [SlotRequestId{12bfcf1674f5b96567a076086dbbfd1b}] in multi 
> task slot [SlotRequestId{494e47eb8318e2c0a1db91dda6b8}] for group 
> bc764cd8ddf7a0cff126f51c16239658.
> 2019-01-02 03:03:47.528 [thread==> flink-akka.actor.default-dispatcher-2] 
> slotpool.SlotSharingManager$MultiTaskSlot#allocateSingleTaskSlot:426 Create 
> single task slot [SlotRequestId{8a877431375df8aeadb2fd845cae15fc}] in multi 
> task slot [SlotRequestId{494e47eb8318e2c0a1db91dda6b8}] for group 
> 0a448493b4782967b150582570326227.
> 2019-01-02 03:03:47.528 [thread==> flink-akka.actor.default-dispatcher-2] 
> slotpool.SlotSharingManager#createRootSlot:151 Create multi task slot 
> [SlotRequestId{56a36d3902ee1a7d0e2e84f50039c1ca}] in slot 
> [SlotRequestId{dbf5c9fa39f1e5a0b34a4a8c10699ee5}].
> 2019-01-02 03:03:47.528 [thread==> flink-akka.actor.default-dispatcher-2] 
> slotpool.SlotSharingManager$MultiTaskSlot#allocateSingleTaskSlot:426 Create 
> single task slot [SlotRequestId{5929c12b52dccee682f86afbe1cff5cf}] in multi 
> task slot [SlotRequestId{56a36d3902ee1a7d0e2e84f50039c1ca}] for group 
> 0a448493b4782967b150582570326227.
> 2019-01-02 03:03:47.529 [thread==> flink-akka.actor.default-dispatcher-5] 
> executiongraph.Execution#transitionState:1316 Source: Custom Source (1/1) 
> (74a4ed4bb2f80aa2b98e11bd09ea64ef) switched from SCHEDULED to DEPLOYING.
> 2019-01-02 03:03:47.529 [thread==> flink-akka.actor.default-dispatcher-5] 
> executiongraph.Execution#deploy:576 Deploying Source: Custom Source (1/1) 
> (attempt #1) to localhost
> 2019-01-02 03:03:47.530 [thread==> flink-akka.actor.default-dispatcher-2] 
> 

[jira] [Resolved] (FLINK-11250) fix thread leaked when StreamTask switched from DEPLOYING to CANCELING

2021-09-16 Thread Yuan Mei (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuan Mei resolved FLINK-11250.
--
Resolution: Fixed

merged commit 
[{{3b6b522}}|https://github.com/apache/flink/commit/3b6b5229df09c05fbeeaf88bf91df6d18c71097a]
 into apache:master


[jira] [Commented] (FLINK-11250) fix thread leaked when StreamTask switched from DEPLOYING to CANCELING

2021-09-16 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416394#comment-17416394
 ] 

Yuan Mei commented on FLINK-11250:
--

merged commit 
[{{3b6b522}}|https://github.com/apache/flink/commit/3b6b5229df09c05fbeeaf88bf91df6d18c71097a]
 into apache:master


[GitHub] [flink] curcur merged pull request #17187: [FLINK-11250][runtime] Added method init for RecordWriter for initialization resources(OutputFlusher) outside of constructor

2021-09-16 Thread GitBox


curcur merged pull request #17187:
URL: https://github.com/apache/flink/pull/17187


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on pull request #17187: [FLINK-11250][runtime] Added method init for RecordWriter for initialization resources(OutputFlusher) outside of constructor

2021-09-16 Thread GitBox


curcur commented on pull request #17187:
URL: https://github.com/apache/flink/pull/17187#issuecomment-921401303


   Thanks @akalash for fixing
   The failure does not seem to be related.
   
   merged.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Myasuka commented on a change in pull request #17203: [FLINK-22944][state] Optimize writing state changelog

2021-09-16 Thread GitBox


Myasuka commented on a change in pull request #17203:
URL: https://github.com/apache/flink/pull/17203#discussion_r710671588



##
File path: 
flink-state-backends/flink-statebackend-changelog/src/main/java/org/apache/flink/state/changelog/restore/ChangelogBackendRestoreOperation.java
##
@@ -89,14 +82,11 @@
 ChangelogKeyedStateBackend backend,
 ChangelogStateBackendHandle backendHandle,
 StateChangelogHandleReader changelogHandleReader,
-ClassLoader classLoader,
-Map> metadataByBackend)
+ClassLoader classLoader)
 throws Exception {
+Map stateIds = new HashMap<>();

Review comment:
   The equivalent change should be to create the stateIds map within the for 
loop; otherwise, stateIds would contain short state ids created by other 
state backends. Though we do update the state ids during meta restore, I'm 
afraid this might not be safe if we change the implementation of updating the 
meta in the future.
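
   The concern above — one stateIds map shared across backend handles picking up 
short state ids created by other backends — can be shown with a self-contained 
toy model. This is only an illustrative sketch of the map-scoping point; the 
class `StateIdScoping`, the method `restoreAll`, and the id-assignment scheme 
are hypothetical and not Flink's actual ChangelogBackendRestoreOperation API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class StateIdScoping {

    // Assign short integer ids to state names, either sharing one map across
    // all backend handles or creating a fresh map per handle (hypothetical model).
    static List<Map<String, Integer>> restoreAll(
            List<List<String>> backends, boolean scopePerBackend) {
        List<Map<String, Integer>> views = new ArrayList<>();
        Map<String, Integer> shared = new HashMap<>(); // the risky shared variant
        for (List<String> states : backends) {
            Map<String, Integer> ids = scopePerBackend ? new HashMap<>() : shared;
            for (String s : states) {
                ids.putIfAbsent(s, ids.size());
            }
            views.add(new HashMap<>(ids)); // snapshot of what this backend "sees"
        }
        return views;
    }

    public static void main(String[] args) {
        List<List<String>> backends = List.of(List.of("a", "b"), List.of("c"));
        // Shared map: the second backend's view also contains ids for "a" and "b".
        System.out.println(restoreAll(backends, false).get(1).size()); // prints 3
        // Map created inside the loop: each backend only sees its own ids.
        System.out.println(restoreAll(backends, true).get(1).size());  // prints 1
    }
}
```

   Scoping the map per iteration keeps each backend's view independent, which is 
the safety property the review asks for.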




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-23180) Initialize checkpoint location lazily in DataStream Batch Jobs

2021-09-16 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang updated FLINK-23180:
-
Fix Version/s: 1.15.0

> Initialize checkpoint location lazily in DataStream Batch Jobs
> --
>
> Key: FLINK-23180
> URL: https://issues.apache.org/jira/browse/FLINK-23180
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.15.0
>
>
> Currently batch jobs will initialize checkpoint location eagerly when 
> {{CheckpointCoordinator}} is created, which will create lots of useless 
> directories on distributed filesystem. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-23180) Initialize checkpoint location lazily in DataStream Batch Jobs

2021-09-16 Thread Yun Tang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Tang resolved FLINK-23180.
--
Release Note: 
merged in master:
164a59ac1bb4dd39e6532478c30234eeafd76cd0
  Resolution: Fixed

> Initialize checkpoint location lazily in DataStream Batch Jobs
> --
>
> Key: FLINK-23180
> URL: https://issues.apache.org/jira/browse/FLINK-23180
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Jiayi Liao
>Assignee: Jiayi Liao
>Priority: Major
>  Labels: pull-request-available
>
> Currently batch jobs will initialize checkpoint location eagerly when 
> {{CheckpointCoordinator}} is created, which will create lots of useless 
> directories on distributed filesystem. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Myasuka merged pull request #17151: [FLINK-23180] Do not initialize checkpoint base locations when checkp…

2021-09-16 Thread GitBox


Myasuka merged pull request #17151:
URL: https://github.com/apache/flink/pull/17151


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi closed pull request #11281: [FLINK-16203][sql] Support JSON_OBJECT for blink planner

2021-09-16 Thread GitBox


JingsongLi closed pull request #11281:
URL: https://github.com/apache/flink/pull/11281


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-16203) Support JSON_OBJECT

2021-09-16 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee closed FLINK-16203.

Fix Version/s: 1.15.0
   Resolution: Fixed

Implemented via master:
4090a066ebd75e12265601c329aaf054beead165
cd08b4b0c3203e336f3767b1902faf472fcbbe7e
f0be546fea64cb8b07ac0e20abbff70ea62dec2f
463a7d4f3c9d9a62ba2ef48d2a3354ffd835383e
7a6bca83a9738b9f3db872f46fe132fc29b635c2
0fb8db5d1a2b962830dd006f33a49f54e21cdb5c
b30fdef064f345b85a940bd904aeb105e9d0b1ea
c64324187df5fd0c45b5424a5ff310e45da4c537

> Support JSON_OBJECT
> ---
>
> Key: FLINK-16203
> URL: https://issues.apache.org/jira/browse/FLINK-16203
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / API
>Reporter: Zili Chen
>Assignee: Ingo Bürk
>Priority: Major
>  Labels: auto-unassigned, pull-request-available
> Fix For: 1.15.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi closed pull request #17186: [FLINK-16203][table] Support JSON_OBJECT()

2021-09-16 Thread GitBox


JingsongLi closed pull request #17186:
URL: https://github.com/apache/flink/pull/17186


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24313) JdbcCatalogFactoryTest fails due to "Gave up waiting for server to start after 10000ms"

2021-09-16 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-24313:
-
Labels: test-stability  (was: )

> JdbcCatalogFactoryTest fails due to "Gave up waiting for server to start 
> after 10000ms"
> ---
>
> Key: FLINK-24313
> URL: https://issues.apache.org/jira/browse/FLINK-24313
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / JDBC
>Affects Versions: 1.14.0
>Reporter: Xintong Song
>Priority: Major
>  Labels: test-stability
> Fix For: 1.14.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24212=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d=14018
> {code}
> Sep 16 14:07:22 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, 
> Time elapsed: 14.065 s <<< FAILURE! - in 
> org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest
> Sep 16 14:07:22 [ERROR] 
> org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest  Time 
> elapsed: 14.065 s  <<< ERROR!
> Sep 16 14:07:22 java.io.IOException: Gave up waiting for server to start 
> after 10000ms
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.embedded.EmbeddedPostgres.waitForServerStartup(EmbeddedPostgres.java:308)
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.embedded.EmbeddedPostgres.startPostmaster(EmbeddedPostgres.java:257)
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.embedded.EmbeddedPostgres.<init>(EmbeddedPostgres.java:146)
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.embedded.EmbeddedPostgres$Builder.start(EmbeddedPostgres.java:554)
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.junit.SingleInstancePostgresRule.pg(SingleInstancePostgresRule.java:46)
> Sep 16 14:07:22   at 
> com.opentable.db.postgres.junit.SingleInstancePostgresRule.before(SingleInstancePostgresRule.java:39)
> Sep 16 14:07:22   at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
> Sep 16 14:07:22   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> Sep 16 14:07:22   at 
> org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
> Sep 16 14:07:22   at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:413)
> Sep 16 14:07:22   at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
> Sep 16 14:07:22   at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
> Sep 16 14:07:22   at 
> org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
> Sep 16 14:07:22   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
> Sep 16 14:07:22   at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
> Sep 16 14:07:22   at 
> java.util.Iterator.forEachRemaining(Iterator.java:116)
> Sep 16 14:07:22   at 
> java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> Sep 16 14:07:22   at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
> Sep 16 14:07:22   at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
> Sep 16 14:07:22   at 
> java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
> Sep 16 14:07:22   at 
> java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
> Sep 16 14:07:22   at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
> Sep 16 14:07:22   at 
> java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
> Sep 16 14:07:22   at 
> org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:82)
> Sep 16 14:07:22   at 
> org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:73)
> Sep 16 14:07:22   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
> Sep 16 14:07:22   at 
> org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
> Sep 16 14:07:22   at 
> org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
> Sep 16 14:07:22   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
> Sep 16 14:07:22   at 
> org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
> Sep 16 14:07:22   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
> Sep 16 14:07:22   at 
> org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
> Sep 16 14:07:22   at 
> 

[jira] [Created] (FLINK-24313) JdbcCatalogFactoryTest fails due to "Gave up waiting for server to start after 10000ms"

2021-09-16 Thread Xintong Song (Jira)
Xintong Song created FLINK-24313:


 Summary: JdbcCatalogFactoryTest fails due to "Gave up waiting for 
server to start after 10000ms"
 Key: FLINK-24313
 URL: https://issues.apache.org/jira/browse/FLINK-24313
 Project: Flink
  Issue Type: Bug
  Components: Connectors / JDBC
Affects Versions: 1.14.0
Reporter: Xintong Song
 Fix For: 1.14.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24212&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=ed165f3f-d0f6-524b-5279-86f8ee7d0e2d&l=14018

{code}
Sep 16 14:07:22 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
elapsed: 14.065 s <<< FAILURE! - in 
org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest
Sep 16 14:07:22 [ERROR] 
org.apache.flink.connector.jdbc.catalog.factory.JdbcCatalogFactoryTest  Time 
elapsed: 14.065 s  <<< ERROR!
Sep 16 14:07:22 java.io.IOException: Gave up waiting for server to start after 
10000ms
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.waitForServerStartup(EmbeddedPostgres.java:308)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.startPostmaster(EmbeddedPostgres.java:257)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres.<init>(EmbeddedPostgres.java:146)
Sep 16 14:07:22 at 
com.opentable.db.postgres.embedded.EmbeddedPostgres$Builder.start(EmbeddedPostgres.java:554)
Sep 16 14:07:22 at 
com.opentable.db.postgres.junit.SingleInstancePostgresRule.pg(SingleInstancePostgresRule.java:46)
Sep 16 14:07:22 at 
com.opentable.db.postgres.junit.SingleInstancePostgresRule.before(SingleInstancePostgresRule.java:39)
Sep 16 14:07:22 at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:50)
Sep 16 14:07:22 at org.junit.rules.RunRules.evaluate(RunRules.java:20)
Sep 16 14:07:22 at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
Sep 16 14:07:22 at 
org.junit.runners.ParentRunner.run(ParentRunner.java:413)
Sep 16 14:07:22 at org.junit.runner.JUnitCore.run(JUnitCore.java:137)
Sep 16 14:07:22 at org.junit.runner.JUnitCore.run(JUnitCore.java:115)
Sep 16 14:07:22 at 
org.junit.vintage.engine.execution.RunnerExecutor.execute(RunnerExecutor.java:43)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183)
Sep 16 14:07:22 at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
Sep 16 14:07:22 at 
java.util.Iterator.forEachRemaining(Iterator.java:116)
Sep 16 14:07:22 at 
java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150)
Sep 16 14:07:22 at 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173)
Sep 16 14:07:22 at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
Sep 16 14:07:22 at 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:485)
Sep 16 14:07:22 at 
org.junit.vintage.engine.VintageTestEngine.executeAllChildren(VintageTestEngine.java:82)
Sep 16 14:07:22 at 
org.junit.vintage.engine.VintageTestEngine.execute(VintageTestEngine.java:73)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:220)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.lambda$execute$6(DefaultLauncher.java:188)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.withInterceptedStreams(DefaultLauncher.java:202)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:181)
Sep 16 14:07:22 at 
org.junit.platform.launcher.core.DefaultLauncher.execute(DefaultLauncher.java:128)
Sep 16 14:07:22 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invokeAllTests(JUnitPlatformProvider.java:150)
Sep 16 14:07:22 at 
org.apache.maven.surefire.junitplatform.JUnitPlatformProvider.invoke(JUnitPlatformProvider.java:120)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
Sep 16 14:07:22 at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Sep 16 14:07:22 Caused by: 
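The failure means EmbeddedPostgres's bounded startup wait expired after 10000ms. The sketch below is a minimal, stdlib-only illustration of that kind of deadline-polling loop, assuming a structure similar to `waitForServerStartup`; the names (`StartupWait`, `waitForStartup`, `isUp`) are illustrative and are not the otj-pg-embedded API. On slow CI workers the practical mitigation is usually a larger startup budget, e.g. via the library's builder startup-wait setting where the version in use offers one.

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

/**
 * Illustrative sketch (not the library's API) of a bounded startup wait
 * like the one EmbeddedPostgres performs before giving up with
 * "Gave up waiting for server to start after 10000ms".
 */
public class StartupWait {
    public static boolean waitForStartup(BooleanSupplier isUp,
                                         Duration timeout,
                                         Duration pollInterval) throws InterruptedException {
        long deadline = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadline) {
            if (isUp.getAsBoolean()) {
                return true;  // server came up within the budget
            }
            Thread.sleep(pollInterval.toMillis());
        }
        // caller would throw IOException("Gave up waiting for server to start ...")
        return false;
    }
}
```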

[jira] [Created] (FLINK-24312) 'New File Sink s3 end-to-end test' fails due to timeout

2021-09-16 Thread Xintong Song (Jira)
Xintong Song created FLINK-24312:


 Summary: 'New File Sink s3 end-to-end test' fails due to timeout
 Key: FLINK-24312
 URL: https://issues.apache.org/jira/browse/FLINK-24312
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Connectors / FileSystem
Affects Versions: 1.12.5
Reporter: Xintong Song
 Fix For: 1.12.6


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=24194&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=ff888d9b-cd34-53cc-d90f-3e446d355529&l=12970



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17305: [hotfix][flink-avro] Removed unused variable

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17305:
URL: https://github.com/apache/flink/pull/17305#issuecomment-921331062


   
   ## CI report:
   
   * 278a30ff968c3d10a7f202395a52e4a3536679a1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24234)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17305: [hotfix][flink-avro] Removed unused variable

2021-09-16 Thread GitBox


flinkbot commented on pull request #17305:
URL: https://github.com/apache/flink/pull/17305#issuecomment-921331062


   
   ## CI report:
   
   * 278a30ff968c3d10a7f202395a52e4a3536679a1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17159: [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationH…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17159:
URL: https://github.com/apache/flink/pull/17159#issuecomment-913532879


   
   ## CI report:
   
   * ea9955b8eea862e516f4f420f4ea58d4745d6180 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23687)
 
   * 335b11341f1b220ad945f5b88795170ca71f7a8d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24233)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #17305: [hotfix][flink-avro] Removed unused variable

2021-09-16 Thread GitBox


flinkbot commented on pull request #17305:
URL: https://github.com/apache/flink/pull/17305#issuecomment-921312448


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 278a30ff968c3d10a7f202395a52e4a3536679a1 (Thu Sep 16 
22:47:51 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] mans2singh opened a new pull request #17305: [hotfix][flink-avro] Removed unused variable

2021-09-16 Thread GitBox


mans2singh opened a new pull request #17305:
URL: https://github.com/apache/flink/pull/17305


   ## What is the purpose of the change
   
   * The variable `actualSchema` is declared but not used.
   
   ## Brief change log
   
   * Removed unused variable.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20661) Elasticsearch6DynamicSinkITCase.testWritingDocuments test failed with ConnectException

2021-09-16 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20661?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-20661:
---
Labels: stale-major test-stability  (was: test-stability)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue has been marked as 
Major but is unassigned and neither itself nor its Sub-Tasks have been updated 
for 60 days. I have gone ahead and added a "stale-major" label to the issue. If this 
ticket is a Major, please either assign yourself or give an update. Afterwards, 
please remove the label, or the issue will be deprioritized in 7 days.


> Elasticsearch6DynamicSinkITCase.testWritingDocuments test failed with 
> ConnectException
> --
>
> Key: FLINK-20661
> URL: https://issues.apache.org/jira/browse/FLINK-20661
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch
>Affects Versions: 1.11.0, 1.13.1
>Reporter: Huang Xingbo
>Priority: Major
>  Labels: stale-major, test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=10989&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=eb5f4d19-2d2d-5856-a4ce-acf5f904a994]
> {code:java}
> 2020-12-17T22:52:41.2992508Z [ERROR] Tests run: 4, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 38.878 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase
> 2020-12-17T22:52:41.2999076Z [ERROR] 
> testWritingDocuments(org.apache.flink.streaming.connectors.elasticsearch.table.Elasticsearch6DynamicSinkITCase)
>   Time elapsed: 16.409 s  <<< ERROR!
> 2020-12-17T22:52:41.3008441Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-12-17T22:52:41.3009290Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-12-17T22:52:41.3048924Z  at 
> org.apache.flink.client.program.PerJobMiniClusterFactory$PerJobMiniClusterJobClient.lambda$getJobExecutionResult$2(PerJobMiniClusterFactory.java:186)
> 2020-12-17T22:52:41.3058938Z  at 
> java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
> 2020-12-17T22:52:41.3067969Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
> 2020-12-17T22:52:41.3080564Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-12-17T22:52:41.3098938Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2020-12-17T22:52:41.3128311Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaInvocationHandler.lambda$invokeRpc$0(AkkaInvocationHandler.java:229)
> 2020-12-17T22:52:41.3141102Z  at 
> java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
> 2020-12-17T22:52:41.3168389Z  at 
> java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
> 2020-12-17T22:52:41.3178382Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
> 2020-12-17T22:52:41.3179506Z  at 
> java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
> 2020-12-17T22:52:41.3180433Z  at 
> org.apache.flink.runtime.concurrent.FutureUtils$1.onComplete(FutureUtils.java:892)
> 2020-12-17T22:52:41.3181380Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:264)
> 2020-12-17T22:52:41.3182138Z  at 
> akka.dispatch.OnComplete.internal(Future.scala:261)
> 2020-12-17T22:52:41.3182903Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:191)
> 2020-12-17T22:52:41.3183893Z  at 
> akka.dispatch.japi$CallbackBridge.apply(Future.scala:188)
> 2020-12-17T22:52:41.3184690Z  at 
> scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
> 2020-12-17T22:52:41.3185566Z  at 
> org.apache.flink.runtime.concurrent.Executors$DirectExecutionContext.execute(Executors.java:74)
> 2020-12-17T22:52:41.3186546Z  at 
> scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:44)
> 2020-12-17T22:52:41.3187525Z  at 
> scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:252)
> 2020-12-17T22:52:41.3188735Z  at 
> akka.pattern.PromiseActorRef.$bang(AskSupport.scala:572)
> 2020-12-17T22:52:41.3189570Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:22)
> 2020-12-17T22:52:41.3190827Z  at 
> akka.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:21)
> 2020-12-17T22:52:41.3191576Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
> 2020-12-17T22:52:41.3192235Z  at 
> scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
> 2020-12-17T22:52:41.3192897Z  at 
> 

[jira] [Updated] (FLINK-15532) Enable strict capacity limit for memory usage for RocksDB

2021-09-16 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15532?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-15532:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> Enable strict capacity limit for memory usage for RocksDB
> -
>
> Key: FLINK-15532
> URL: https://issues.apache.org/jira/browse/FLINK-15532
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available, stale-assigned
> Fix For: 1.14.0
>
> Attachments: image-2020-10-23-14-39-45-997.png, 
> image-2020-10-23-14-41-10-584.png, image-2020-10-23-14-43-18-739.png, 
> image-2020-10-23-14-55-08-120.png
>
>
> Currently, due to the limitation of RocksDB (see 
> [issue-6247|https://github.com/facebook/rocksdb/issues/6247]), we cannot 
> create a strict-capacity-limit LRUCache which is shared among RocksDB 
> instance(s).
> This issue tracks this problem and offer the ability of strict mode once we 
> could enable this feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-23739) PrintTableSink do not implement SupportsPartitioning interface

2021-09-16 Thread Flink Jira Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-23739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Flink Jira Bot updated FLINK-23739:
---
Labels: pull-request-available stale-assigned  (was: pull-request-available)

I am the [Flink Jira Bot|https://github.com/apache/flink-jira-bot/] and I help 
the community manage its development. I see this issue is assigned but has not 
received an update in 30 days, so it has been labeled "stale-assigned".
If you are still working on the issue, please remove the label and add a 
comment updating the community on your progress.  If this issue is waiting on 
feedback, please consider this a reminder to the committer/reviewer. Flink is a 
very active project, and so we appreciate your patience.
If you are no longer working on the issue, please unassign yourself so someone 
else may work on it.


> PrintTableSink do not implement SupportsPartitioning interface
> --
>
> Key: FLINK-23739
> URL: https://issues.apache.org/jira/browse/FLINK-23739
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Affects Versions: 1.12.4
>Reporter: Xianxun Ye
>Assignee: Xianxun Ye
>Priority: Major
>  Labels: pull-request-available, stale-assigned
>
>  
> {code:java}
> // code placeholder
> tEnv.executeSql(
> "CREATE TABLE PrintTable (name STRING, score INT, da STRING, hr 
> STRING)\n"
> + "PARTITIONED BY (da, hr)"
> + "WITH (\n"
> + "  'connector' = 'print'"
> + ")");
> tEnv.executeSql("INSERT INTO PrintTable SELECT 'n1' as name, 1 as score, 
> '2021-08-12' as da, '11' as hr");
> {code}
> Now print records with a partitioned table is not supported.
> {code:java}
> // code placeholder
> Exception in thread "main" org.apache.flink.table.api.TableException: Table 
> 'default_catalog.default_database.PrintTable' is a partitioned table, but the 
> underlying DynamicTableSink doesn't implement the SupportsPartitioning 
> interface. at 
> org.apache.flink.table.planner.sinks.DynamicSinkUtils.validatePartitioning(DynamicSinkUtils.java:345)
>  at 
> org.apache.flink.table.planner.sinks.DynamicSinkUtils.prepareDynamicSink(DynamicSinkUtils.java:260)
>  at 
> org.apache.flink.table.planner.sinks.DynamicSinkUtils.toRel(DynamicSinkUtils.java:87)
> {code}
> `org.apache.flink.table.factories.PrintTableSinkFactory$PrintSink` and 
> `org.apache.flink.table.factories.BlackHoleTableSinkFactory$BlackHoleSink` 
> should implement the `SupportsPartitioning` interface. 
>  
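For the print/blackhole sinks, accepting any static partition spec would be enough to pass the planner's validation. The sketch below shows the shape of that fix with a self-contained toy stand-in for Flink's interface; `SupportsPartitioningSketch` and `PrintSinkSketch` are illustrative names, and the real interface (`org.apache.flink.table.connector.sink.abilities.SupportsPartitioning`) has additional methods beyond the one mirrored here.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in for Flink's SupportsPartitioning ability interface; only
// applyStaticPartition is mirrored so the shape of the fix can be shown
// without Flink on the classpath.
interface SupportsPartitioningSketch {
    void applyStaticPartition(Map<String, String> partition);
}

// A print-style sink does not care where rows land, so it can simply
// accept and remember whatever partition spec the planner pushes down.
class PrintSinkSketch implements SupportsPartitioningSketch {
    private Map<String, String> staticPartition = new LinkedHashMap<>();

    @Override
    public void applyStaticPartition(Map<String, String> partition) {
        this.staticPartition = new LinkedHashMap<>(partition);
    }

    Map<String, String> partition() {
        return staticPartition;
    }
}
```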



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #17159: [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationH…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17159:
URL: https://github.com/apache/flink/pull/17159#issuecomment-913532879


   
   ## CI report:
   
   * ea9955b8eea862e516f4f420f4ea58d4745d6180 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=23687)
 
   * 335b11341f1b220ad945f5b88795170ca71f7a8d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dmvk commented on a change in pull request #17137: [FLINK-21853][e2e] Harden common_ha.sh#ha_tm_watchdog

2021-09-16 Thread GitBox


dmvk commented on a change in pull request #17137:
URL: https://github.com/apache/flink/pull/17137#discussion_r702880423



##
File path: flink-end-to-end-tests/test-scripts/common_ha.sh
##
@@ -127,6 +127,18 @@ function kill_single {
 echo "Killed JM @ ${PID}"
 }
 
+function start_expected_num_tms() {
+  local EXPECTED_TMS=$1
+
+  local RUNNING_TMS=`jps | grep 'TaskManager' | wc -l`
+
+  while [ "${RUNNING_TMS}" -lt "${EXPECTED_TMS}" ]; do
+  echo "Starting new TM."
+  "$FLINK_DIR"/bin/taskmanager.sh start > /dev/null
+  RUNNING_TMS=$(( $RUNNING_TMS + 1 ))

Review comment:
   ```suggestion
 RUNNING_TMS=$(( RUNNING_TMS + 1 ))
   ```
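The reviewer's point is that inside `$(( ... ))` a variable needs no leading `$`. A standalone sketch of the loop with that suggestion applied is below; `jps` and `taskmanager.sh` from the real `common_ha.sh` are stubbed (the current count is passed in and the start command is replaced by a counter) so the counting logic can be exercised in isolation.

```shell
# Sketch of start_expected_num_tms with the review suggestion applied.
# Arg 1: expected TM count; arg 2: currently running TMs (in the real
# script this comes from `jps | grep 'TaskManager' | wc -l`).
start_expected_num_tms() {
  local EXPECTED_TMS=$1
  local RUNNING_TMS=$2
  local STARTED=0
  while [ "${RUNNING_TMS}" -lt "${EXPECTED_TMS}" ]; do
    # real script: "$FLINK_DIR"/bin/taskmanager.sh start > /dev/null
    STARTED=$(( STARTED + 1 ))
    RUNNING_TMS=$(( RUNNING_TMS + 1 ))   # no '$' needed inside $(( ... ))
  done
  echo "${STARTED}"
}
```

Called with `start_expected_num_tms 3 1`, the sketch reports starting two TMs to reach the expected count.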




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dmvk commented on pull request #17159: [FLINK-24129][connectors-pulsar] Harden TopicRangeTest.rangeCreationH…

2021-09-16 Thread GitBox


dmvk commented on pull request #17159:
URL: https://github.com/apache/flink/pull/17159#issuecomment-921280507


   @XComp Thanks for the review, I've addressed your comments, PTAL


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17304: [FLINK-23969][connector/pulsar] Create e2e tests for pulsar connector. [1.14]

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17304:
URL: https://github.com/apache/flink/pull/17304#issuecomment-921026449


   
   ## CI report:
   
   * 22b0273f342094db631be2361a53b7c81b853f6e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24228)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17264: [FLINK-24253][JDBC] Load JdbcDialects via Service Loaders

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17264:
URL: https://github.com/apache/flink/pull/17264#issuecomment-918259683


   
   ## CI report:
   
   * 40850458353ee9014c1e9b28c1a6907c97896fc6 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24227)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-24301) use Async transport on all statefun playground examples

2021-09-16 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416329#comment-17416329
 ] 

Seth Wiesman commented on FLINK-24301:
--

fixed in dev: dffc59428a91853332a038e9d13970108d2302e9

release-3.1: 33bbe6a6f1f437afda24770958844c74d5d8f81e

> use Async transport on all statefun playground examples
> ---
>
> Key: FLINK-24301
> URL: https://issues.apache.org/jira/browse/FLINK-24301
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-24301) use Async transport on all statefun playground examples

2021-09-16 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-24301.

Resolution: Fixed

> use Async transport on all statefun playground examples
> ---
>
> Key: FLINK-24301
> URL: https://issues.apache.org/jira/browse/FLINK-24301
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-24284) Add a greeter and a showcase for the JavaScript SDK

2021-09-16 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-24284.

Resolution: Done

> Add a greeter and a showcase for the JavaScript SDK
> ---
>
> Key: FLINK-24284
> URL: https://issues.apache.org/jira/browse/FLINK-24284
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
>
> We need to add a greeter and a showcase for the Javascript SDK to the 
> playground.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-24284) Add a greeter and a showcase for the JavaScript SDK

2021-09-16 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-24284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17416327#comment-17416327
 ] 

Seth Wiesman commented on FLINK-24284:
--

fixed in dev: 3b52dfc554540c03dbd4b2acfa4440ceb8914e38

> Add a greeter and a showcase for the JavaScript SDK
> ---
>
> Key: FLINK-24284
> URL: https://issues.apache.org/jira/browse/FLINK-24284
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
>
> We need to add a greeter and a showcase for the Javascript SDK to the 
> playground.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-24301) use Async transport on all statefun playground examples

2021-09-16 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-24301:
---
Labels: pull-request-available  (was: )

> use Async transport on all statefun playground examples
> ---
>
> Key: FLINK-24301
> URL: https://issues.apache.org/jira/browse/FLINK-24301
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun-playground] sjwiesman closed pull request #14: [FLINK-24301] use Async transport on all statefun playground examples

2021-09-16 Thread GitBox


sjwiesman closed pull request #14:
URL: https://github.com/apache/flink-statefun-playground/pull/14


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17151: [FLINK-23180] Do not initialize checkpoint base locations when checkp…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17151:
URL: https://github.com/apache/flink/pull/17151#issuecomment-913110826


   
   ## CI report:
   
   * 4b431b9e8b67caf75739a39180715e0fcc23279e UNKNOWN
   * 04ed85a5788c897214f073694f8a3196688b0c7e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24225)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17294: [BP-1.14][FLINK-24281][connectors/kafka] Migrate all format tests from FlinkKafkaProducer to KafkaSink

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17294:
URL: https://github.com/apache/flink/pull/17294#issuecomment-920662267


   
   ## CI report:
   
   * 3f048d3f5981814b9c0967e7efc760a06ee0e72a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24210)
 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24221)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17187: [FLINK-11250][runtime] Added method init for RecordWriter for initialization resources(OutputFlusher) outside of constructor

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17187:
URL: https://github.com/apache/flink/pull/17187#issuecomment-914504554


   
   ## CI report:
   
   * 9295f1fa3a30daa92b0efef34a61f073acf10151 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24223)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17151: [FLINK-23180] Do not initialize checkpoint base locations when checkp…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17151:
URL: https://github.com/apache/flink/pull/17151#issuecomment-913110826


   
   ## CI report:
   
   * 4b431b9e8b67caf75739a39180715e0fcc23279e UNKNOWN
   * 04ed85a5788c897214f073694f8a3196688b0c7e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24225)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #16994: [FLINK-23991][yarn] Specifying yarn.staging-dir fail when staging scheme is…

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #16994:
URL: https://github.com/apache/flink/pull/16994#issuecomment-906233672


   
   ## CI report:
   
   * aed2811ee91a0d21c1c3af54b7c2a84a48142ce6 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24222)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17304: [FLINK-23969][connector/pulsar] Create e2e tests for pulsar connector. [1.14]

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17304:
URL: https://github.com/apache/flink/pull/17304#issuecomment-921026449


   
   ## CI report:
   
   * ccbe0fd32d2b00f085e073a5ea0e2d3dfc863ac5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24226)
 
   * 22b0273f342094db631be2361a53b7c81b853f6e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24228)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #17264: [FLINK-24253][JDBC] Load JdbcDialects via Service Loaders

2021-09-16 Thread GitBox


flinkbot edited a comment on pull request #17264:
URL: https://github.com/apache/flink/pull/17264#issuecomment-918259683


   
   ## CI report:
   
   * 37bbf283bb26fd04ee6aae33578ba81ec7223e73 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24020)
 
   * 40850458353ee9014c1e9b28c1a6907c97896fc6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=24227)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-24300) MultipleInputOperator is running much more slowly in TPCDS

2021-09-16 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-24300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-24300:
-
Fix Version/s: (was: 1.15.0)

> MultipleInputOperator is running much more slowly in TPCDS
> --
>
> Key: FLINK-24300
> URL: https://issues.apache.org/jira/browse/FLINK-24300
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.14.0, 1.15.0
>Reporter: Zhilong Hong
>Assignee: Dawid Wysakowicz
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.14.0
>
> Attachments: 64570e4c56955713ca599fd1d7ae7be891a314c6.png, 
> detail-of-the-job.png, e3010c16947ed8da2ecb7d89a3aa08dacecc524a.png, 
> jstack-2.txt, jstack.txt
>
>
> When we are running TPCDS with release 1.14 we find that the job with 
> {{MultipleInputOperator}} is running much more slowly than before. With a 
> binary search among the commits, we find that the issue may have been 
> introduced by FLINK-23408. 
> At the commit 64570e4c56955713ca599fd1d7ae7be891a314c6, the job in TPCDS runs 
> normally, as the image below illustrates:
> !64570e4c56955713ca599fd1d7ae7be891a314c6.png|width=600!
> At the commit e3010c16947ed8da2ecb7d89a3aa08dacecc524a, the job q2.sql gets 
> stuck for a pretty long time (longer than half an hour), as the image below 
> illustrates:
> !e3010c16947ed8da2ecb7d89a3aa08dacecc524a.png|width=600!
> The detail of the job is illustrated below:
> !detail-of-the-job.png|width=600!
> The job uses a {{MultipleInputOperator}} with one normal input and two 
> chained FileSources. It has finished reading the normal input and starts to 
> read the chained source. Each chained source has one source data fetcher.
> We capture the jstack of the stuck tasks and attach the file below. From 
> [^jstack.txt] we can see the main thread is blocked waiting for the lock, 
> and the lock is held by a source data fetcher. The source data fetcher is 
> still running but the stack keeps showing {{CompletableFuture.cleanStack}}.
> This issue happens in a batch job. However, from where it gets blocked, it 
> seems to also affect streaming jobs.
> For the reference, the code of TPCDS we are running is located at 
> [https://github.com/ververica/flink-sql-benchmark/tree/dev].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
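The fix PR for this issue is titled "SourceOperator#getAvailableFuture reuses future". As a hypothetical sketch only (this is not Flink's actual SourceOperator code; the class and method names below are illustrative), the reuse pattern that title suggests looks like this: hand out one pending availability future until it is completed, instead of allocating a fresh CompletableFuture on every call.

```java
import java.util.concurrent.CompletableFuture;

/**
 * Illustrative sketch of an availability-future reuse pattern.
 * Repeated calls to getAvailableFuture() return the SAME pending future,
 * avoiding per-call CompletableFuture allocation; notifyAvailable()
 * completes the current future and installs a new one for later waiters.
 */
class AvailabilityHelper {
    private CompletableFuture<Void> availableFuture = new CompletableFuture<>();

    /** Returns the same pending future on repeated calls. */
    public synchronized CompletableFuture<Void> getAvailableFuture() {
        return availableFuture;
    }

    /** Completes the current future; subsequent calls get a fresh one. */
    public synchronized void notifyAvailable() {
        CompletableFuture<Void> toComplete = availableFuture;
        availableFuture = new CompletableFuture<>();
        toComplete.complete(null);
    }
}
```

Reusing one future keeps the dependent-action chains attached to a single object rather than spread over many short-lived futures, which is the kind of allocation/cleanup pressure the jstack above hints at.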


[GitHub] [flink] dawidwys edited a comment on pull request #17303: [FLINK-24300] SourceOperator#getAvailableFuture reuses future

2021-09-16 Thread GitBox


dawidwys edited a comment on pull request #17303:
URL: https://github.com/apache/flink/pull/17303#issuecomment-921137756


   I triggered a benchmark request: 
http://codespeed.dak8s.net:8080/job/flink-benchmark-request/494/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on pull request #17303: [FLINK-24300] SourceOperator#getAvailableFuture reuses future

2021-09-16 Thread GitBox


dawidwys commented on pull request #17303:
URL: https://github.com/apache/flink/pull/17303#issuecomment-921137756


   I triggered a benchmark request: 
http://codespeed.dak8s.net:8080/job/flink-benchmark-request/494/


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on a change in pull request #17303: [FLINK-24300] SourceOperator#getAvailableFuture reuses future

2021-09-16 Thread GitBox


dawidwys commented on a change in pull request #17303:
URL: https://github.com/apache/flink/pull/17303#discussion_r710365769



##
File path: 
flink-tests/src/test/java/org/apache/flink/test/streaming/runtime/SourceNAryInputChainingITCase.java
##
@@ -142,6 +153,26 @@ public void testMixedInputsWithMultipleUnionsExecution() 
throws Exception {
 verifySequence(result, 1L, 60L);
 }
 
+// This tests FLINK-24300. The timeout is put here on purpose. Without the fix
+// the test takes very long, but still finishes, so it would not hit the
+// global timeout.
+@Test(timeout = 3_000)

Review comment:
   I added it as a benchmark here: 
https://github.com/apache/flink-benchmarks/pull/31




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@flink.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



