[GitHub] [flink] flinkbot commented on pull request #14010: [FLINK-19842][python] remove the check of counter

2020-11-09 Thread GitBox


flinkbot commented on pull request #14010:
URL: https://github.com/apache/flink/pull/14010#issuecomment-724529514


   
   ## CI report:
   
   * 335fc0d6cef066292755bee1729eafc65d84cf92 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13996: [FLINK-19882][e2e] Properly forward exit code in test

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13996:
URL: https://github.com/apache/flink/pull/13996#issuecomment-723899813


   
   ## CI report:
   
   * bb9b5d26caef5bc121e02b3c3115af3c4784507d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9353)
   * 647e526d2927dbf22ec506f2aadf396ccc6d8a09 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9384)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13985: [FLINK-19697] Make the Committer/GlobalCommitter retry-able

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13985:
URL: https://github.com/apache/flink/pull/13985#issuecomment-723646083


   
   ## CI report:
   
   * 9917da1cba5acff13ee14a68fa58763f2b430c75 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9318)
   * ff1f9338cfb574457655fb2bd94a6581317f36c7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9383)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13972:
URL: https://github.com/apache/flink/pull/13972#issuecomment-723244452


   
   ## CI report:
   
   * 18bd1620f55d1363b4d3173dc2e9c14e83ba859b UNKNOWN
   * f7b60d2413a5a86136b1d00d319dca668787559d UNKNOWN
   * e1393af2a259f5db3cd78f49130412df83a062a5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9357)
   * e570f8d0dcd05968a54fd495f0aae838b8cc17b5 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-20020) Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract.

2020-11-09 Thread Kostas Kloudas (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kostas Kloudas reassigned FLINK-20020:
--

Assignee: Nicholas Jiang

> Make UnsuccessfulExecutionException part of the 
> JobClient.getJobExecutionResult() contract.
> ---
>
> Key: FLINK-20020
> URL: https://issues.apache.org/jira/browse/FLINK-20020
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.12.0
>Reporter: Kostas Kloudas
>Assignee: Nicholas Jiang
>Priority: Major
>
> Currently, different implementations of the {{JobClient}} throw different 
> exceptions. The {{ClusterClientJobClientAdapter}} wraps the exception from 
> the {{JobResult.toJobExecutionResult()}} into a 
> {{ProgramInvocationException}}, the {{MiniClusterJobClient}} simply wraps it 
> in a {{CompletionException}} and the {{EmbeddedJobClient}} wraps it into an 
> {{UnsuccessfulExecutionException}}. 
> With this issue I would like to propose making the exception uniform and part 
> of the contract. As a candidate I would propose the behaviour of the 
> {{EmbeddedJobClient}}, which throws an {{UnsuccessfulExecutionException}}; the 
> reason is that this exception also includes the status of the application.
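A plain-Python sketch of the proposed contract (hypothetical names and shapes; the real contract is Java and the `JobClient` implementations above): every client translates its native failure into the one agreed exception type, which also carries the final application status.

```python
class UnsuccessfulExecutionException(Exception):
    """Stand-in for Flink's exception: it carries the final application
    status, which is why the issue proposes it as the common contract."""
    def __init__(self, job_status, cause=None):
        super().__init__("Job did not finish successfully (status: %s)" % job_status)
        self.job_status = job_status
        self.__cause__ = cause


def get_job_execution_result(raw_result):
    # Each JobClient implementation would translate its native failure
    # (ProgramInvocationException, CompletionException, ...) into the
    # single agreed exception type instead of its own wrapper.
    if raw_result["status"] != "FINISHED":
        raise UnsuccessfulExecutionException(raw_result["status"],
                                             raw_result.get("error"))
    return raw_result["result"]
```

Callers would then catch one exception type regardless of which client executed the job.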



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-19842) PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query is unstable

2020-11-09 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-19842:
--

Assignee: Huang Xingbo

> PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query 
> is unstable
> -
>
> Key: FLINK-19842
> URL: https://issues.apache.org/jira/browse/FLINK-19842
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Huang Xingbo
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8401=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3
> {code}
> === FAILURES 
> ===
> _ 
> PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query 
> _
> self = <pyflink.table.tests.test_udtf.PyFlinkStreamUserDefinedTableFunctionTests testMethod=test_table_function_with_sql_query>
> def test_table_function_with_sql_query(self):
> self._register_table_sink(
> ['a', 'b', 'c'],
> [DataTypes.BIGINT(), DataTypes.BIGINT(), DataTypes.BIGINT()])
> 
> self.t_env.create_temporary_system_function(
> "multi_emit", udtf(MultiEmit(), result_types=[DataTypes.BIGINT(), 
> DataTypes.BIGINT()]))
> 
> t = self.t_env.from_elements([(1, 1, 3), (2, 1, 6), (3, 2, 9)], ['a', 
> 'b', 'c'])
> self.t_env.register_table("MyTable", t)
> t = self.t_env.sql_query(
> "SELECT a, x, y FROM MyTable LEFT JOIN LATERAL 
> TABLE(multi_emit(a, b)) as T(x, y)"
> " ON TRUE")
> actual = self._get_output(t)
> >   self.assert_equals(actual, ["1,1,0", "2,2,0", "3,3,0", "3,3,1"])
> pyflink/table/tests/test_udtf.py:61: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> cls = <class 'pyflink.table.tests.test_udtf.PyFlinkStreamUserDefinedTableFunctionTests'>
> actual = <JavaObject id=o37759>, expected = ['1,1,0', '2,2,0', '3,3,0', '3,3,1']
> {code}
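For reference, the semantics the failing test asserts can be reconstructed from its expected output (this is an inference, not the actual PyFlink implementation): the lateral table join expands each input row by the rows the UDTF emits. In plain Python:

```python
def multi_emit(a, b):
    # hypothetical stand-in for the test's MultiEmit UDTF,
    # reconstructed from the expected output: emit b rows of (a, i)
    for i in range(b):
        yield (a, i)

rows = [(1, 1, 3), (2, 1, 6), (3, 2, 9)]          # columns (a, b, c)
# "SELECT a, x, y FROM MyTable LEFT JOIN LATERAL TABLE(multi_emit(a, b)) as T(x, y)"
actual = ["%d,%d,%d" % (a, x, y)
          for (a, b, c) in rows
          for (x, y) in multi_emit(a, b)]
assert actual == ["1,1,0", "2,2,0", "3,3,0", "3,3,1"]
```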





[jira] [Commented] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229034#comment-17229034
 ] 

Robert Metzger commented on FLINK-20069:


Thanks a lot!

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43
> {code}
> Starting: CmdLine
> ==
> Task : Command line
> Description  : Run a command line script using Bash on Linux and macOS and 
> cmd.exe on Windows
> Version  : 2.177.3
> Author   : Microsoft Corporation
> Help : 
> https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
> ==
> Generating script.
> Script contents:
> exec ./tools/ci/docs.sh
> == Starting Command Output ===
> /bin/bash --noprofile --norc 
> /home/vsts/work/_temp/71629bf0-181a-4981-a18d-44e3c94229f1.sh
> Waiting for server...
> [DEPRECATED] The `--path` flag is deprecated because it relies on being 
> remembered across bundler invocations, which bundler will no longer do in 
> future versions. Instead please use `bundle config set path 
> '/home/vsts/gem_cache'`, and stop using this flag
> Fetching gem metadata from https://rubygems.org/.
> jekyll-4.0.1 requires rubygems version >= 2.7.0, which is incompatible with 
> the
> current version, 2.6.14.4
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> Waiting for server...
> {code}
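The root failure in the log above is a version requirement: jekyll-4.0.1 needs RubyGems >= 2.7.0 while the build agent ships 2.6.14.4, so the docs server never starts and the check spins on "Waiting for server...". The failing check amounts to a segment-wise dotted-version comparison, sketched here in Python (a hypothetical helper, not part of the build scripts):

```python
def version_too_old(current, required):
    # compare dotted versions segment by segment, e.g. 2.6.14.4 < 2.7.0,
    # via tuple comparison: (2, 6, 14, 4) < (2, 7, 0)
    parse = lambda v: tuple(int(part) for part in v.split("."))
    return parse(current) < parse(required)

# the situation from the log: jekyll-4.0.1 requires rubygems >= 2.7.0
assert version_too_old("2.6.14.4", "2.7.0")
assert not version_too_old("2.7.0", "2.7.0")
```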





[jira] [Commented] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229031#comment-17229031
 ] 

Dian Fu commented on FLINK-20069:
-

[~rmetzger] I will take a look at this issue.

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43





[jira] [Assigned] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-20069:
---

Assignee: Dian Fu

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Assignee: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43





[jira] [Commented] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229030#comment-17229030
 ] 

Robert Metzger commented on FLINK-20069:


[~dian.fu] can you look into it, or should I?

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43





[jira] [Updated] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-20069:
---
Issue Type: Bug  (was: Improvement)

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Major
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43





[jira] [Updated] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-20069:
---
Fix Version/s: 1.12.0

> docs_404_check doesn't work properly
> 
>
> Key: FLINK-20069
> URL: https://issues.apache.org/jira/browse/FLINK-20069
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43





[GitHub] [flink] wangxlong commented on a change in pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-09 Thread GitBox


wangxlong commented on a change in pull request #13972:
URL: https://github.com/apache/flink/pull/13972#discussion_r520344156



##
File path: flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/RowDataToJsonConverters.java
##
@@ -272,8 +271,10 @@ private RowDataToJsonConverter createMapConverter(
            case DROP:
                continue;
            case FAIL:
-               throw new RuntimeException("Map key is null, please have a check."
-                   + " You can setup null key handling mode to drop entry or replace with a no-null literal.");
+               throw new RuntimeException(String.format(
+                   "JSON format doesn't support to serialize map data with null keys. "
+                   + "You can drop null key entries or encode null in literals by specifying %s option.",
+                   JsonOptions.MAP_NULL_KEY_LITERAL.key()));

Review comment:
   done~
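For context, the diff above touches only the FAIL branch of the map converter's null-key handling. A plain-Python sketch of the three modes discussed in the message (DROP / LITERAL / FAIL) — a hypothetical helper, not Flink's actual converter:

```python
def convert_map(entries, null_key_mode="FAIL", null_key_literal="null"):
    """Mimic the null-key handling of the JSON map converter:
    DROP skips null-key entries, LITERAL substitutes a literal key,
    FAIL raises with the message from the patch above."""
    out = {}
    for key, value in entries.items():
        if key is None:
            if null_key_mode == "DROP":
                continue                   # silently skip the entry
            if null_key_mode == "FAIL":
                raise RuntimeError(
                    "JSON format doesn't support to serialize map data with "
                    "null keys. You can drop null key entries or encode null "
                    "in literals by specifying the map-null-key options.")
            key = null_key_literal         # LITERAL: substitute a literal key
        out[key] = value
    return out
```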









[GitHub] [flink] flinkbot edited a comment on pull request #14009: [FLINK-20064][docs] Fix the broken links

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14009:
URL: https://github.com/apache/flink/pull/14009#issuecomment-724507562


   
   ## CI report:
   
   * af7a44f12f20bf3a2a02d007128bb1f57e651a83 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9382)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13996: [FLINK-19882][e2e] Properly forward exit code in test

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13996:
URL: https://github.com/apache/flink/pull/13996#issuecomment-723899813


   
   ## CI report:
   
   * bb9b5d26caef5bc121e02b3c3115af3c4784507d Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9353)
   * 647e526d2927dbf22ec506f2aadf396ccc6d8a09 UNKNOWN
   * 647e526d2927dbf22ec506f2aadf396ccc6d8a09 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #14008: [FLINK-20013][network] BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14008:
URL: https://github.com/apache/flink/pull/14008#issuecomment-724507299


   
   ## CI report:
   
   * 409377bf03bf344f259a1386a225ea33b510626c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9381)
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Closed] (FLINK-18070) Time attribute been materialized after sub graph optimize

2020-11-09 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he closed FLINK-18070.
--
Resolution: Fixed

master: https://github.com/apache/flink/pull/13280

> Time attribute been materialized after sub graph optimize
> -
>
> Key: FLINK-18070
> URL: https://issues.apache.org/jira/browse/FLINK-18070
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0
>Reporter: YufeiLiu
>Assignee: YufeiLiu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Hi, I want to use a window aggregate after creating a temporary view, and have multiple 
> sinks. But it throws an exception:
> {code:java}
> java.lang.AssertionError: type mismatch:
> ref:
> TIME ATTRIBUTE(PROCTIME) NOT NULL
> input:
> TIMESTAMP(3) NOT NULL
> {code}
> I look into the optimizer logic, there is comment at 
> {{CommonSubGraphBasedOptimizer}}:
> "1. In general, for multi-sinks users tend to use VIEW which is a natural 
> common sub-graph."
> After sub-graph optimization, the time attribute from the source has been converted to 
> a basic TIMESTAMP type according to {{FlinkRelTimeIndicatorProgram}}. But my 
> CREATE VIEW SQL is a simple query, so in theory it shouldn't need to materialize the 
> time attribute.
> Here is my code:
> {code:java}
> // connector.type COLLECTION is for debug use
> tableEnv.sqlUpdate("CREATE TABLE source (\n" +
>   "`ts` AS PROCTIME(),\n" +
>   "`order_type` INT\n" +
>   ") WITH (\n" +
>   "'connector.type' = 'COLLECTION',\n" +
>   "'format.type' = 'json'\n" +
>   ")\n");
> tableEnv.createTemporaryView("source_view", tableEnv.sqlQuery("SELECT * FROM 
> source"));
> tableEnv.sqlUpdate("CREATE TABLE sink (\n" +
>   "`result` BIGINT\n" +
>   ") WITH (\n" +
>   "'connector.type' = 'COLLECTION',\n" +
>   "'format.type' = 'json'\n" +
>   ")\n");
> tableEnv.sqlUpdate("INSERT INTO sink \n" +
>   "SELECT\n" +
>   "COUNT(1)\n" +
>   "FROM\n" +
>   "`source_view`\n" +
>   "WHERE\n" +
>   " `order_type` = 33\n" +
>   "GROUP BY\n" +
>   "TUMBLE(`ts`, INTERVAL '5' SECOND)\n");
> tableEnv.sqlUpdate("INSERT INTO sink \n" +
>   "SELECT\n" +
>   "COUNT(1)\n" +
>   "FROM\n" +
>   "`source_view`\n" +
>   "WHERE\n" +
>   " `order_type` = 34\n" +
>   "GROUP BY\n" +
>   "TUMBLE(`ts`, INTERVAL '5' SECOND)\n");
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #13985: [FLINK-19697] Make the Committer/GlobalCommitter retry-able

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13985:
URL: https://github.com/apache/flink/pull/13985#issuecomment-723646083


   
   ## CI report:
   
   * 9917da1cba5acff13ee14a68fa58763f2b430c75 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9318)
   * ff1f9338cfb574457655fb2bd94a6581317f36c7 UNKNOWN
   * ff1f9338cfb574457655fb2bd94a6581317f36c7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong commented on a change in pull request #13997: [FLINK-20058][kafka connector] Improve tests for per-partition-waterm…

2020-11-09 Thread GitBox


wuchong commented on a change in pull request #13997:
URL: https://github.com/apache/flink/pull/13997#discussion_r520338895



##
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesRuntimeFunctions.java
##
@@ -318,9 +318,9 @@ public void invoke(RowData value, Context context) throws Exception {
        Row row = (Row) converter.toExternal(value);
        assert row != null;
        if (rowtimeIndex >= 0) {
-           LocalDateTime rowtime = (LocalDateTime) row.getField(rowtimeIndex);
+           TimestampData rowtime = TimestampData.fromLocalDateTime((LocalDateTime) row.getField(rowtimeIndex));

Review comment:
   ```java
// currently, rowtime attribute always using 3 precision
TimestampData rowtime = value.getTimestamp(rowtimeIndex, 3);
   ```

##
File path: flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesRuntimeFunctions.java
##
@@ -318,9 +318,9 @@ public void invoke(RowData value, Context context) throws Exception {
Row row = (Row) converter.toExternal(value);

Review comment:
   Remove the `@SuppressWarnings("rawtypes")` annotation on this method.









[GitHub] [flink] godfreyhe merged pull request #13280: [FLINK-18070][table-planner-blink] Don't materialize time attribute in SubGraphOptimize

2020-11-09 Thread GitBox


godfreyhe merged pull request #13280:
URL: https://github.com/apache/flink/pull/13280


   







[jira] [Issue Comment Deleted] (FLINK-19682) Actively timeout checkpoint barriers on the inputs

2020-11-09 Thread Nicholas Jiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Jiang updated FLINK-19682:
---
Comment: was deleted

(was: [~pnowojski], yeah, I'm very willing to work on that tight schedule and 
closely coordinate with you. I come from the team of Becket Qin, therefore I 
could work with you closely enough.)

> Actively timeout checkpoint barriers on the inputs
> --
>
> Key: FLINK-19682
> URL: https://issues.apache.org/jira/browse/FLINK-19682
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Nicholas Jiang
>Priority: Minor
>
> After receiving the first checkpoint barrier announcement, we should register some 
> kind of processing-time timeout to switch to an unaligned checkpoint.





[jira] [Commented] (FLINK-20065) UnalignedCheckpointCompatibilityITCase.test failed with AskTimeoutException

2020-11-09 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229026#comment-17229026
 ] 

Xintong Song commented on FLINK-20065:
--

[~AHeise], could you help take a look at this?

> UnalignedCheckpointCompatibilityITCase.test failed with AskTimeoutException
> ---
>
> Key: FLINK-20065
> URL: https://issues.apache.org/jira/browse/FLINK-20065
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.3
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.3
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9362=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=45cc9205-bdb7-5b54-63cd-89fdc0983323
> {code}
> 2020-11-09T22:19:47.2714024Z [ERROR] test[type: SAVEPOINT, startAligned: 
> true](org.apache.flink.test.checkpointing.UnalignedCheckpointCompatibilityITCase)
>   Time elapsed: 1.293 s  <<< ERROR!
> 2020-11-09T22:19:47.2715260Z java.util.concurrent.ExecutionException: 
> java.util.concurrent.TimeoutException: Invocation of public default 
> java.util.concurrent.CompletableFuture 
> org.apache.flink.runtime.webmonitor.RestfulGateway.stopWithSavepoint(org.apache.flink.api.common.JobID,java.lang.String,boolean,org.apache.flink.api.common.time.Time)
>  timed out.
> 2020-11-09T22:19:47.2716743Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-11-09T22:19:47.2718213Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-11-09T22:19:47.2719166Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointCompatibilityITCase.runAndTakeSavepoint(UnalignedCheckpointCompatibilityITCase.java:113)
> 2020-11-09T22:19:47.2720278Z  at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointCompatibilityITCase.test(UnalignedCheckpointCompatibilityITCase.java:97)
> 2020-11-09T22:19:47.2721126Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-09T22:19:47.2721771Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-09T22:19:47.2722773Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-09T22:19:47.2723479Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-09T22:19:47.2724187Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-09T22:19:47.2725026Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-09T22:19:47.2725817Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-09T22:19:47.2726595Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-09T22:19:47.2727515Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-11-09T22:19:47.2728192Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-11-09T22:19:47.2744089Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-09T22:19:47.2744907Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-09T22:19:47.2745573Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-09T22:19:47.2746037Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-09T22:19:47.2746445Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-09T22:19:47.2746868Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-09T22:19:47.2747443Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-09T22:19:47.2747876Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-09T22:19:47.2748297Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-09T22:19:47.2748694Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-11-09T22:19:47.2749054Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-11-09T22:19:47.2749414Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-09T22:19:47.2749819Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-09T22:19:47.2750373Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-09T22:19:47.2750923Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-09T22:19:47.2751555Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-09T22:19:47.2752148Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-09T22:19:47.2752938Z  at 
> 

[jira] [Updated] (FLINK-19842) PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query is unstable

2020-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19842?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19842:
---
Labels: pull-request-available test-stability  (was: test-stability)

> PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query 
> is unstable
> -
>
> Key: FLINK-19842
> URL: https://issues.apache.org/jira/browse/FLINK-19842
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8401=logs=9cada3cb-c1d3-5621-16da-0f718fb86602=8d78fe4f-d658-5c70-12f8-4921589024c3
> {code}
> === FAILURES 
> ===
> _ 
> PyFlinkStreamUserDefinedTableFunctionTests.test_table_function_with_sql_query 
> _
> self = <pyflink.table.tests.test_udtf.PyFlinkStreamUserDefinedTableFunctionTests 
> testMethod=test_table_function_with_sql_query>
> def test_table_function_with_sql_query(self):
> self._register_table_sink(
> ['a', 'b', 'c'],
> [DataTypes.BIGINT(), DataTypes.BIGINT(), DataTypes.BIGINT()])
> 
> self.t_env.create_temporary_system_function(
> "multi_emit", udtf(MultiEmit(), result_types=[DataTypes.BIGINT(), 
> DataTypes.BIGINT()]))
> 
> t = self.t_env.from_elements([(1, 1, 3), (2, 1, 6), (3, 2, 9)], ['a', 
> 'b', 'c'])
> self.t_env.register_table("MyTable", t)
> t = self.t_env.sql_query(
> "SELECT a, x, y FROM MyTable LEFT JOIN LATERAL 
> TABLE(multi_emit(a, b)) as T(x, y)"
> " ON TRUE")
> actual = self._get_output(t)
> >   self.assert_equals(actual, ["1,1,0", "2,2,0", "3,3,0", "3,3,1"])
> pyflink/table/tests/test_udtf.py:61: 
> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
> _ 
> cls = <class 
> 'pyflink.table.tests.test_udtf.PyFlinkStreamUserDefinedTableFunctionTests'>
> actual = <JavaObject id=o37759>, expected = ['1,1,0', '2,2,0', '3,3,0', '3,3,1']
> {code}





[GitHub] [flink] flinkbot commented on pull request #14010: [FLINK-19842][python] remove the check of counter

2020-11-09 Thread GitBox


flinkbot commented on pull request #14010:
URL: https://github.com/apache/flink/pull/14010#issuecomment-724512963


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 335fc0d6cef066292755bee1729eafc65d84cf92 (Tue Nov 10 
07:16:52 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-19842).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-13733) FlinkKafkaInternalProducerITCase.testHappyPath fails on Travis

2020-11-09 Thread Qingsheng Ren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229025#comment-17229025
 ] 

Qingsheng Ren commented on FLINK-13733:
---

[~rmetzger] Sorry for my late response! I'll create a PR to fix this now. 
Thanks for your reminder~ 

> FlinkKafkaInternalProducerITCase.testHappyPath fails on Travis
> --
>
> Key: FLINK-13733
> URL: https://issues.apache.org/jira/browse/FLINK-13733
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0, 1.10.0, 1.11.0, 1.12.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
> Attachments: 20200421.13.tar.gz
>
>
> The {{FlinkKafkaInternalProducerITCase.testHappyPath}} fails on Travis with 
> {code}
> Test 
> testHappyPath(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>  failed with:
> java.util.NoSuchElementException
>   at 
> org.apache.kafka.common.utils.AbstractIterator.next(AbstractIterator.java:52)
>   at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.getOnlyElement(Iterators.java:302)
>   at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterables.getOnlyElement(Iterables.java:289)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.assertRecord(FlinkKafkaInternalProducerITCase.java:169)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testHappyPath(FlinkKafkaInternalProducerITCase.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://api.travis-ci.org/v3/job/571870358/log.txt





[GitHub] [flink] HuangXingBo opened a new pull request #14010: [FLINK-19842][python] remove the check of counter

2020-11-09 Thread GitBox


HuangXingBo opened a new pull request #14010:
URL: https://github.com/apache/flink/pull/14010


   ## What is the purpose of the change
   
   *This pull request removes the check of the counter.*
   
   
   ## Brief change log
   
 - *remove the check of counter*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] rmetzger commented on pull request #13996: [FLINK-19882][e2e] Properly forward exit code in test

2020-11-09 Thread GitBox


rmetzger commented on pull request #13996:
URL: https://github.com/apache/flink/pull/13996#issuecomment-724510239


   After the build failure, I had to change the implementation quite a bit. The 
build should go through this time ;) 







[GitHub] [flink] wuchong commented on a change in pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-09 Thread GitBox


wuchong commented on a change in pull request #13972:
URL: https://github.com/apache/flink/pull/13972#discussion_r520336181



##
File path: 
flink-formats/flink-json/src/main/java/org/apache/flink/formats/json/RowDataToJsonConverters.java
##
@@ -272,8 +271,10 @@ private RowDataToJsonConverter createMapConverter(
case DROP:
    continue;
case FAIL:
-   throw new RuntimeException("Map key is null, please have a check."
-       + " You can setup null key handling mode to drop entry or replace with a no-null literal.");
+   throw new RuntimeException(String.format(
+       "JSON format doesn't support to serialize map data with null keys. "
+       + "You can drop null key entries or encode null in literals by specifying %s option.",
+       JsonOptions.MAP_NULL_KEY_LITERAL.key()));

Review comment:
   Sorry, it should be `MAP_NULL_KEY_MODE` option.
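
The behaviour the diff above adjusts can be illustrated outside of Flink. Below
is a minimal sketch of the three map null-key handling modes discussed in
FLINK-19912 (FAIL, DROP, LITERAL); the function name, mode strings, and error
message wording are illustrative assumptions, not Flink's actual API.

```python
def serialize_map(entries, null_key_mode="FAIL", null_key_literal="null"):
    """Sketch of JSON map serialization with configurable null-key handling.

    Hypothetical names; mirrors the three modes from the diff above:
    FAIL raises, DROP skips the entry, LITERAL substitutes a fixed key.
    """
    result = {}
    for key, value in entries:
        if key is None:
            if null_key_mode == "DROP":
                continue  # silently drop entries with a null key
            if null_key_mode == "LITERAL":
                result[null_key_literal] = value  # encode null as a literal key
                continue
            # Default FAIL mode: refuse to serialize a null map key.
            raise ValueError(
                "JSON format doesn't support to serialize map data with null keys. "
                "You can drop null key entries or encode null in literals by "
                "specifying the map-null-key mode option.")
        result[key] = value
    return result
```

For example, `serialize_map([("a", 1), (None, 2)], "DROP")` keeps only the
`"a"` entry, while `"LITERAL"` mode writes the configured literal as the key.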









[jira] [Closed] (FLINK-20070) NPE in SourceCoordinatorProviderTest.testCheckpointAndReset

2020-11-09 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20070.
---
Fix Version/s: (was: 1.12.0)
   Resolution: Duplicate

[~godfreyhe] This issue seems to be a duplicate of FLINK-20050 and should have 
been fixed just now.

> NPE in SourceCoordinatorProviderTest.testCheckpointAndReset
> ---
>
> Key: FLINK-20070
> URL: https://issues.apache.org/jira/browse/FLINK-20070
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Reporter: godfrey he
>Priority: Major
>
> https://dev.azure.com/godfreyhe/c147b7ad-1708-46c3-9021-cc523e50c4d5/_apis/build/builds/71/logs/114
> {code:java}
> 2020-11-10T03:41:10.8231846Z [INFO] Running 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContextTest
> 2020-11-10T03:41:11.2510061Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 1.171 s <<< FAILURE! - in 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest
> 2020-11-10T03:41:11.2511837Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 1.055 s  <<< ERROR!
> 2020-11-10T03:41:11.2512610Z java.lang.NullPointerException
> 2020-11-10T03:41:11.2513268Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> 2020-11-10T03:41:11.2513967Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-10T03:41:11.2514553Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-10T03:41:11.2515230Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-10T03:41:11.2515827Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-10T03:41:11.2516428Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-10T03:41:11.2517107Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-10T03:41:11.2517757Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-10T03:41:11.2518431Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-10T03:41:11.2519082Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-11-10T03:41:11.2519677Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-10T03:41:11.2520292Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-10T03:41:11.2521100Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-10T03:41:11.2521831Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-10T03:41:11.2522420Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-10T03:41:11.2522988Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-10T03:41:11.2523582Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-10T03:41:11.2524165Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-10T03:41:11.2524951Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-10T03:41:11.2525570Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-10T03:41:11.2526288Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-10T03:41:11.2526969Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-10T03:41:11.2527742Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-10T03:41:11.2528467Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-10T03:41:11.2529169Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-10T03:41:11.2529844Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-11-10T03:41:11.2530480Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}





[GitHub] [flink] flinkbot commented on pull request #14009: [FLINK-20064][docs] Fix the broken links

2020-11-09 Thread GitBox


flinkbot commented on pull request #14009:
URL: https://github.com/apache/flink/pull/14009#issuecomment-724507562


   
   ## CI report:
   
   * af7a44f12f20bf3a2a02d007128bb1f57e651a83 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-19739) CompileException when windowing in batch mode: A method named "replace" is not declared in any enclosing class nor any supertype

2020-11-09 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-19739:

Component/s: Table SQL / Planner

> CompileException when windowing in batch mode: A method named "replace" is 
> not declared in any enclosing class nor any supertype 
> -
>
> Key: FLINK-19739
> URL: https://issues.apache.org/jira/browse/FLINK-19739
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.12.0, 1.11.2
> Environment: Ubuntu 18.04
> Python 3.8, jar built from master yesterday.
> Or Python 3.7, installed latest version from pip.
>Reporter: Alex Hall
>Priority: Critical
> Fix For: 1.12.0
>
>
> Example script:
> {code:python}
> from pyflink.table import EnvironmentSettings, BatchTableEnvironment
> from pyflink.table.window import Tumble
> env_settings = (
> 
> EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
> )
> table_env = BatchTableEnvironment.create(environment_settings=env_settings)
> table_env.execute_sql(
> """
> CREATE TABLE table1 (
> amount INT,
> ts TIMESTAMP(3),
> WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
> ) WITH (
> 'connector.type' = 'filesystem',
> 'format.type' = 'csv',
> 'connector.path' = '/home/alex/work/test-flink/data1.csv'
> )
> """
> )
> table1 = table_env.from_path("table1")
> table = (
> table1
> .window(Tumble.over("5.days").on("ts").alias("__window"))
> .group_by("__window")
> .select("amount.sum")
> )
> print(table.to_pandas())
> {code}
> Output:
> {code}
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil 
> (file:/home/alex/work/flink/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/opt/flink-python_2.11-1.12-SNAPSHOT.jar)
>  to constructor java.nio.DirectByteBuffer(long,int)
> WARNING: Please consider reporting this to the maintainers of 
> org.apache.flink.api.python.shaded.io.netty.util.internal.ReflectionUtil
> WARNING: Use --illegal-access=warn to enable warnings of further illegal 
> reflective access operations
> WARNING: All illegal access operations will be denied in a future release
> /* 1 */
> /* 2 */  public class LocalHashWinAggWithoutKeys$59 extends 
> org.apache.flink.table.runtime.operators.TableStreamOperator
> /* 3 */  implements 
> org.apache.flink.streaming.api.operators.OneInputStreamOperator, 
> org.apache.flink.streaming.api.operators.BoundedOneInput {
> /* 4 */
> /* 5 */private final Object[] references;
> /* 6 */
> /* 7 */private static final org.slf4j.Logger LOG$2 =
> /* 8 */  org.slf4j.LoggerFactory.getLogger("LocalHashWinAgg");
> /* 9 */
> /* 10 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggMapKeyTypes$5;
> /* 11 */private transient 
> org.apache.flink.table.types.logical.LogicalType[] aggBufferTypes$6;
> /* 12 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap 
> aggregateMap$7;
> /* 13 */org.apache.flink.table.data.binary.BinaryRowData 
> emptyAggBuffer$9 = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 14 */org.apache.flink.table.data.writer.BinaryRowWriter 
> emptyAggBufferWriterTerm$10 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(emptyAggBuffer$9);
> /* 15 */org.apache.flink.table.data.GenericRowData hashAggOutput = 
> new org.apache.flink.table.data.GenericRowData(2);
> /* 16 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggMapKey$17 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 17 */private transient 
> org.apache.flink.table.data.binary.BinaryRowData reuseAggBuffer$18 = new 
> org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 18 */private transient 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry 
> reuseAggMapEntry$19 = new 
> org.apache.flink.table.runtime.operators.aggregate.BytesHashMap.Entry(reuseAggMapKey$17,
>  reuseAggBuffer$18);
> /* 19 */org.apache.flink.table.data.binary.BinaryRowData aggMapKey$3 
> = new org.apache.flink.table.data.binary.BinaryRowData(1);
> /* 20 */org.apache.flink.table.data.writer.BinaryRowWriter 
> aggMapKeyWriter$4 = new 
> org.apache.flink.table.data.writer.BinaryRowWriter(aggMapKey$3);
> /* 21 */private boolean hasInput = false;
> /* 22 */org.apache.flink.streaming.runtime.streamrecord.StreamRecord 
> element = new 
> 

[GitHub] [flink] flinkbot commented on pull request #14008: [FLINK-20013][network] BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-09 Thread GitBox


flinkbot commented on pull request #14008:
URL: https://github.com/apache/flink/pull/14008#issuecomment-724507299


   
   ## CI report:
   
   * 409377bf03bf344f259a1386a225ea33b510626c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-20020) Make UnsuccessfulExecutionException part of the JobClient.getJobExecutionResult() contract.

2020-11-09 Thread Nicholas Jiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229021#comment-17229021
 ] 

Nicholas Jiang commented on FLINK-20020:


[~kkl0u], I agree with the point you mentioned above: making the exception 
uniform and part of the contract, following the behaviour of the 
EmbeddedJobClient, which throws an UnsuccessfulExecutionException that 
includes the status of the application. For every JobClient, 
getJobExecutionResult() should tell users the status of the application when 
throwing the exception.
If you don't have time to unify this, I would like to work on this issue.

> Make UnsuccessfulExecutionException part of the 
> JobClient.getJobExecutionResult() contract.
> ---
>
> Key: FLINK-20020
> URL: https://issues.apache.org/jira/browse/FLINK-20020
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Affects Versions: 1.12.0
>Reporter: Kostas Kloudas
>Priority: Major
>
> Currently, different implementations of the {{JobClient}} throw different 
> exceptions. The {{ClusterClientJobClientAdapter}} wraps the exception from 
> the {{JobResult.toJobExecutionResult()}} into a 
> {{ProgramInvocationException}}, the {{MiniClusterJobClient}} simply wraps it 
> in a {{CompletionException}} and the {{EmbeddedJobClient}} wraps it into an 
> {{UnsuccessfulExecutionException}}. 
> With this issue I would like to propose making the exception uniform and part 
> of the contract and as a candidate I would propose the behaviour of the 
> {{EmbeddedJobClient}} which throws an {{UnsuccessfulExecutionException}}. The 
> reason is that this exception also includes the status of the application.
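>
> The proposed contract, a single exception type that carries the application
> status, can be sketched as follows. This is only an illustrative model of the
> idea; the class name, status values, and wrapper function are hypothetical,
> not the actual Flink API.
>
> {code:python}
> class UnsuccessfulExecutionError(Exception):
>     """Sketch of a uniform failure type carrying the application status,
>     as proposed for JobClient.getJobExecutionResult(); names are illustrative."""
>
>     def __init__(self, job_id, status, cause):
>         super().__init__(f"Job {job_id} finished with status {status}: {cause}")
>         self.job_id = job_id
>         self.status = status
>         self.cause = cause
>
>
> def get_job_execution_result(fetch_result):
>     """Wrap whatever the underlying client raises into the single contract type."""
>     try:
>         return fetch_result()
>     except UnsuccessfulExecutionError:
>         raise  # already the contract type
>     except Exception as e:  # e.g. ProgramInvocationException, CompletionException
>         raise UnsuccessfulExecutionError("job-1", "FAILED", e) from e
> {code}
>
> With this shape, callers always catch one exception type and can inspect the
> application status on it, regardless of which JobClient implementation ran the job.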





[jira] [Commented] (FLINK-20070) NPE in SourceCoordinatorProviderTest.testCheckpointAndReset

2020-11-09 Thread godfrey he (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229020#comment-17229020
 ] 

godfrey he commented on FLINK-20070:


cc [~becket_qin]

> NPE in SourceCoordinatorProviderTest.testCheckpointAndReset
> ---
>
> Key: FLINK-20070
> URL: https://issues.apache.org/jira/browse/FLINK-20070
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Reporter: godfrey he
>Priority: Major
> Fix For: 1.12.0
>
>
> https://dev.azure.com/godfreyhe/c147b7ad-1708-46c3-9021-cc523e50c4d5/_apis/build/builds/71/logs/114
> {code:java}
> 2020-11-10T03:41:10.8231846Z [INFO] Running 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorContextTest
> 2020-11-10T03:41:11.2510061Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 1.171 s <<< FAILURE! - in 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest
> 2020-11-10T03:41:11.2511837Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 1.055 s  <<< ERROR!
> 2020-11-10T03:41:11.2512610Z java.lang.NullPointerException
> 2020-11-10T03:41:11.2513268Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> 2020-11-10T03:41:11.2513967Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-10T03:41:11.2514553Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-10T03:41:11.2515230Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-10T03:41:11.2515827Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-10T03:41:11.2516428Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-10T03:41:11.2517107Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-10T03:41:11.2517757Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-10T03:41:11.2518431Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-10T03:41:11.2519082Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-11-10T03:41:11.2519677Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-10T03:41:11.2520292Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-10T03:41:11.2521100Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-10T03:41:11.2521831Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-10T03:41:11.2522420Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-10T03:41:11.2522988Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-10T03:41:11.2523582Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-10T03:41:11.2524165Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-10T03:41:11.2524951Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-10T03:41:11.2525570Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-10T03:41:11.2526288Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-10T03:41:11.2526969Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-10T03:41:11.2527742Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-10T03:41:11.2528467Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-10T03:41:11.2529169Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-10T03:41:11.2529844Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-11-10T03:41:11.2530480Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}





[jira] [Commented] (FLINK-19982) AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg fails with "RuntimeException: Job restarted"

2020-11-09 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229019#comment-17229019
 ] 

Jark Wu commented on FLINK-19982:
-

Hi [~TsReaper], could you help look into this problem? The exception is 
thrown from {{UncheckpointedCollectResultBuffer}}.

> AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg fails with 
> "RuntimeException: Job restarted"
> ---
>
> Key: FLINK-19982
> URL: https://issues.apache.org/jira/browse/FLINK-19982
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/georgeryan1322/Flink/_build/results?buildId=336=logs=a1590513-d0ea-59c3-3c7b-aad756c48f25=5129dea2-618b-5c74-1b8f-9ec63a37a8a6
> {code}
> [ERROR] Tests run: 16, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
> 59.688 s <<< FAILURE! - in 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase
> [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 2.789 s  <<< ERROR!
> java.lang.RuntimeException: Job restarted
>   at 
> org.apache.flink.streaming.api.operators.collect.UncheckpointedCollectResultBuffer.sinkRestarted(UncheckpointedCollectResultBuffer.java:41)
>   at 
> org.apache.flink.streaming.api.operators.collect.AbstractCollectResultBuffer.dealWithResponse(AbstractCollectResultBuffer.java:87)
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:127)
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103)
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
>   at 
> org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115)
>   at 
> org.apache.flink.table.api.internal.TableResultImpl$CloseableRowIteratorWrapper.hasNext(TableResultImpl.java:355)
>   at java.util.Iterator.forEachRemaining(Iterator.java:115)
>   at 
> org.apache.flink.util.CollectionUtil.iteratorToList(CollectionUtil.java:114)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:298)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:138)
>   at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:104)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:153)
>   at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}
> In the logs, I find occurrences of this:
> {code}
> 16:37:49,262 [main] WARN  
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher [] - An 
> exception occurs when fetching query results
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.dispatcher.UnavailableDispatcherOperationException: 
> Unable to get JobMasterGateway for initializing job. The requested operation 
> is not available while the JobManager is initializing.
>   at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357) 
> ~[?:1.8.0_242]
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
> ~[?:1.8.0_242]
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.sendRequest(CollectResultFetcher.java:163)
>  ~[flink-streaming-java_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultFetcher.next(CollectResultFetcher.java:134)
>  [flink-streaming-java_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:103)
>  [flink-streaming-java_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>   at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:77)
>  [flink-streaming-java_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
>   at 
> org.apache.flink.table.planner.sinks.SelectTableSinkBase$RowIteratorWrapper.hasNext(SelectTableSinkBase.java:115)
>  [classes/:?]
>   at 
> 

[jira] [Created] (FLINK-20070) NPE in SourceCoordinatorProviderTest.testCheckpointAndReset

2020-11-09 Thread godfrey he (Jira)
godfrey he created FLINK-20070:
--

 Summary: NPE in 
SourceCoordinatorProviderTest.testCheckpointAndReset
 Key: FLINK-20070
 URL: https://issues.apache.org/jira/browse/FLINK-20070
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Reporter: godfrey he
 Fix For: 1.12.0


https://dev.azure.com/godfreyhe/c147b7ad-1708-46c3-9021-cc523e50c4d5/_apis/build/builds/71/logs/114


{code:java}
2020-11-10T03:41:10.8231846Z [INFO] Running 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorContextTest
2020-11-10T03:41:11.2510061Z [ERROR] Tests run: 2, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 1.171 s <<< FAILURE! - in 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest
2020-11-10T03:41:11.2511837Z [ERROR] 
testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
  Time elapsed: 1.055 s  <<< ERROR!
2020-11-10T03:41:11.2512610Z java.lang.NullPointerException
2020-11-10T03:41:11.2513268Zat 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
2020-11-10T03:41:11.2513967Zat 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-11-10T03:41:11.2514553Zat 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-11-10T03:41:11.2515230Zat 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-11-10T03:41:11.2515827Zat 
java.lang.reflect.Method.invoke(Method.java:498)
2020-11-10T03:41:11.2516428Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-11-10T03:41:11.2517107Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-11-10T03:41:11.2517757Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-11-10T03:41:11.2518431Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-11-10T03:41:11.2519082Zat 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2020-11-10T03:41:11.2519677Zat 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-11-10T03:41:11.2520292Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-11-10T03:41:11.2521100Zat 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-11-10T03:41:11.2521831Zat 
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-11-10T03:41:11.2522420Zat 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-11-10T03:41:11.2522988Zat 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-11-10T03:41:11.2523582Zat 
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-11-10T03:41:11.2524165Zat 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-11-10T03:41:11.2524951Zat 
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-11-10T03:41:11.2525570Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-11-10T03:41:11.2526288Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-11-10T03:41:11.2526969Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-11-10T03:41:11.2527742Zat 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-11-10T03:41:11.2528467Zat 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-11-10T03:41:11.2529169Zat 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-11-10T03:41:11.2529844Zat 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-11-10T03:41:11.2530480Zat 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
{code}






[jira] [Assigned] (FLINK-19253) SourceReaderTestBase.testAddSplitToExistingFetcher hangs

2020-11-09 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin reassigned FLINK-19253:


Assignee: Xuannan Su

> SourceReaderTestBase.testAddSplitToExistingFetcher hangs
> 
>
> Key: FLINK-19253
> URL: https://issues.apache.org/jira/browse/FLINK-19253
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Xuannan Su
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6521=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-15T10:51:35.5236837Z "SourceFetcher" #39 prio=5 os_prio=0 
> tid=0x7f70d0a57000 nid=0x858 in Object.wait() [0x7f6fd81f]
> 2020-09-15T10:51:35.5237447Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5237962Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5238886Z  - waiting on <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5239380Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5240401Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5241471Z  - locked <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5242180Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5243245Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5244263Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5245128Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5245973Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5247081Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5247816Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5248809Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5249463Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5249827Z 
> 2020-09-15T10:51:35.5250383Z "SourceFetcher" #37 prio=5 os_prio=0 
> tid=0x7f70d0a4b000 nid=0x856 in Object.wait() [0x7f6f80cfa000]
> 2020-09-15T10:51:35.5251124Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5251636Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5252767Z  - waiting on <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5253336Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5254184Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5255220Z  - locked <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5255678Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5256235Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5256803Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5257351Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5257838Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5258284Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5258856Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5259350Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5260011Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5260211Z 
> 2020-09-15T10:51:35.5260574Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7f6f70042000 nid=0x844 waiting on condition [0x7f6fd832a000]
> 2020-09-15T10:51:35.5261036Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-09-15T10:51:35.5261342Z  at sun.misc.Unsafe.park(Native Method)
> 2020-09-15T10:51:35.5261972Z  - parking to wait for  <0x815d0810> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-09-15T10:51:35.5262456Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
> 
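The dumps above show "SourceFetcher" threads parked in Object.wait() on a java.util.ArrayDeque monitor, never woken. A minimal, Flink-free sketch of that state (class name, queue type, and timings are illustrative, not the actual test code):

```java
import java.util.ArrayDeque;

// Hedged sketch (not Flink code): reproduces the state in the dump above --
// a "SourceFetcher" thread WAITING on an ArrayDeque monitor because it
// called Object.wait() and no other thread ever offers an element or
// calls notify().
public class WaitHangSketch {
    static final ArrayDeque<String> queue = new ArrayDeque<>();

    static Thread.State parkFetcherAndSample() throws InterruptedException {
        Thread fetcher = new Thread(() -> {
            synchronized (queue) {
                while (queue.isEmpty()) {
                    try {
                        queue.wait(); // parks forever: nobody will notify()
                    } catch (InterruptedException e) {
                        return;       // lets the demo shut down cleanly
                    }
                }
            }
        }, "SourceFetcher");
        fetcher.start();

        // Poll until the fetcher has actually parked in wait().
        Thread.State state = fetcher.getState();
        long deadline = System.currentTimeMillis() + 2000;
        while (state != Thread.State.WAITING
                && System.currentTimeMillis() < deadline) {
            Thread.sleep(10);
            state = fetcher.getState();
        }
        fetcher.interrupt(); // unblock so the JVM can exit
        fetcher.join();
        return state;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(parkFetcherAndSample()); // prints WAITING
    }
}
```

Unlike this demo, which interrupts its fetcher so the JVM can exit, the hanging test never wakes the fetcher threads, which is why the build times out.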

[jira] [Created] (FLINK-20069) docs_404_check doesn't work properly

2020-11-09 Thread Dian Fu (Jira)
Dian Fu created FLINK-20069:
---

 Summary: docs_404_check doesn't work properly
 Key: FLINK-20069
 URL: https://issues.apache.org/jira/browse/FLINK-20069
 Project: Flink
  Issue Type: Improvement
  Components: Build System
Affects Versions: 1.11.0, 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9361=logs=6dc02e5c-5865-5c6a-c6c5-92d598e3fc43

{code}
Starting: CmdLine
==
Task : Command line
Description  : Run a command line script using Bash on Linux and macOS and 
cmd.exe on Windows
Version  : 2.177.3
Author   : Microsoft Corporation
Help : 
https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/command-line
==
Generating script.
Script contents:
exec ./tools/ci/docs.sh
== Starting Command Output ===
/bin/bash --noprofile --norc 
/home/vsts/work/_temp/71629bf0-181a-4981-a18d-44e3c94229f1.sh
Waiting for server...
[DEPRECATED] The `--path` flag is deprecated because it relies on being 
remembered across bundler invocations, which bundler will no longer do in 
future versions. Instead please use `bundle config set path 
'/home/vsts/gem_cache'`, and stop using this flag
Fetching gem metadata from https://rubygems.org/.
jekyll-4.0.1 requires rubygems version >= 2.7.0, which is incompatible with the
current version, 2.6.14.4
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
Waiting for server...
{code}
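The log's root cause is a version mismatch ("jekyll-4.0.1 requires rubygems version >= 2.7.0" vs. the installed 2.6.14.4), after which the check loops on "Waiting for server..." forever. A fail-fast dotted-version comparison would surface this up front; sketched here in Java (the class and messages are illustrative, not part of tools/ci/docs.sh):

```java
import java.util.Arrays;

// Hedged sketch: compare dotted version strings numerically, component by
// component, so a too-old RubyGems aborts the docs check immediately
// instead of waiting for a server that will never start.
public class VersionGate {
    static int compare(String a, String b) {
        int[] x = Arrays.stream(a.split("\\.")).mapToInt(Integer::parseInt).toArray();
        int[] y = Arrays.stream(b.split("\\.")).mapToInt(Integer::parseInt).toArray();
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? x[i] : 0; // missing components count as 0
            int yi = i < y.length ? y[i] : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    public static void main(String[] args) {
        String required = "2.7.0";
        String current = "2.6.14.4"; // the version reported in the build log
        if (compare(current, required) < 0) {
            System.out.println("RubyGems " + current + " < required " + required
                    + "; abort the docs build instead of waiting for the server");
        }
    }
}
```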





[jira] [Closed] (FLINK-19342) stop overriding convertFrom() in FlinkPlannerImpl after upgrade calcite to 1.23

2020-11-09 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu closed FLINK-19342.
--
Fix Version/s: 1.12.0
   Resolution: Fixed

fixed in FLINK-16579

> stop overriding convertFrom() in FlinkPlannerImpl after upgrade calcite to 
> 1.23
> ---
>
> Key: FLINK-19342
> URL: https://issues.apache.org/jira/browse/FLINK-19342
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Leonard Xu
>Priority: Major
> Fix For: 1.12.0
>
>
> In FLINK-18548, we overrode `convertFrom()` in FlinkPlannerImpl as a 
> workaround to support flexible temporal join syntax. However, this feature 
> has been supported since Calcite 1.23, so we should stop overriding it once 
> we upgrade Calcite.





[GitHub] [flink] klion26 edited a comment on pull request #13048: [FLINK-18755][docs-zh] RabbitMQ QoS Chinese Documentation

2020-11-09 Thread GitBox


klion26 edited a comment on pull request #13048:
URL: https://github.com/apache/flink/pull/13048#issuecomment-724502400


   @rmetzger it seems we have created a tag 1.12.0-rc1 (no 1.12-xx branch); after 
merging this commit into master, do I have to do anything else (e.g. merge it into 
some other branches)?
   I'll close the related Jira and update the commit message there.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on pull request #13048: [FLINK-18755][docs-zh] RabbitMQ QoS Chinese Documentation

2020-11-09 Thread GitBox


klion26 commented on pull request #13048:
URL: https://github.com/apache/flink/pull/13048#issuecomment-724502400


   @rmetzger it seems we have created a tag 1.12.0-rc1 (no 1.12-xx branch); after 
merging this commit into master, do I have to do anything else (e.g. merge it into 
some other branches)?







[jira] [Commented] (FLINK-20050) SourceCoordinatorProviderTest.testCheckpointAndReset failed with NullPointerException

2020-11-09 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229014#comment-17229014
 ] 

Jiangjie Qin commented on FLINK-20050:
--

Merged to master:  ae09f9be438736763db129937bb7cc70e37fd429

> SourceCoordinatorProviderTest.testCheckpointAndReset failed with 
> NullPointerException
> -
>
> Key: FLINK-20050
> URL: https://issues.apache.org/jira/browse/FLINK-20050
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-11-08T22:24:39.5642544Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 0.954 s  <<< ERROR!
> 2020-11-08T22:24:39.5643055Z java.lang.NullPointerException
> 2020-11-08T22:24:39.5643578Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> {code}





[jira] [Resolved] (FLINK-20050) SourceCoordinatorProviderTest.testCheckpointAndReset failed with NullPointerException

2020-11-09 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-20050.
--
Resolution: Fixed

> SourceCoordinatorProviderTest.testCheckpointAndReset failed with 
> NullPointerException
> -
>
> Key: FLINK-20050
> URL: https://issues.apache.org/jira/browse/FLINK-20050
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-11-08T22:24:39.5642544Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 0.954 s  <<< ERROR!
> 2020-11-08T22:24:39.5643055Z java.lang.NullPointerException
> 2020-11-08T22:24:39.5643578Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> {code}





[jira] [Commented] (FLINK-19253) SourceReaderTestBase.testAddSplitToExistingFetcher hangs

2020-11-09 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229013#comment-17229013
 ] 

Xuannan Su commented on FLINK-19253:


[~becket_qin] I'd love to submit a fix. Could you assign the ticket to me?

> SourceReaderTestBase.testAddSplitToExistingFetcher hangs
> 
>
> Key: FLINK-19253
> URL: https://issues.apache.org/jira/browse/FLINK-19253
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6521=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-15T10:51:35.5236837Z "SourceFetcher" #39 prio=5 os_prio=0 
> tid=0x7f70d0a57000 nid=0x858 in Object.wait() [0x7f6fd81f]
> 2020-09-15T10:51:35.5237447Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5237962Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5238886Z  - waiting on <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5239380Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5240401Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5241471Z  - locked <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5242180Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5243245Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5244263Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5245128Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5245973Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5247081Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5247816Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5248809Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5249463Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5249827Z 
> 2020-09-15T10:51:35.5250383Z "SourceFetcher" #37 prio=5 os_prio=0 
> tid=0x7f70d0a4b000 nid=0x856 in Object.wait() [0x7f6f80cfa000]
> 2020-09-15T10:51:35.5251124Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5251636Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5252767Z  - waiting on <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5253336Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5254184Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5255220Z  - locked <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5255678Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5256235Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5256803Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5257351Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5257838Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5258284Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5258856Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5259350Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5260011Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5260211Z 
> 2020-09-15T10:51:35.5260574Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7f6f70042000 nid=0x844 waiting on condition [0x7f6fd832a000]
> 2020-09-15T10:51:35.5261036Zjava.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-09-15T10:51:35.5261342Z  at sun.misc.Unsafe.park(Native Method)
> 2020-09-15T10:51:35.5261972Z  - parking to wait for  <0x815d0810> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-09-15T10:51:35.5262456Z  at 
> 

[GitHub] [flink] becketqin commented on pull request #14001: [FLINK-20050][runtime/operator] Fix methods that are only visible for…

2020-11-09 Thread GitBox


becketqin commented on pull request #14001:
URL: https://github.com/apache/flink/pull/14001#issuecomment-724500658


   Thanks for the review. @Sxnan 
   Merged to master: ae09f9be438736763db129937bb7cc70e37fd429







[GitHub] [flink] becketqin closed pull request #14001: [FLINK-20050][runtime/operator] Fix methods that are only visible for…

2020-11-09 Thread GitBox


becketqin closed pull request #14001:
URL: https://github.com/apache/flink/pull/14001


   







[GitHub] [flink] klion26 merged pull request #13048: [FLINK-18755][docs-zh] RabbitMQ QoS Chinese Documentation

2020-11-09 Thread GitBox


klion26 merged pull request #13048:
URL: https://github.com/apache/flink/pull/13048


   







[GitHub] [flink] flinkbot commented on pull request #14009: [FLINK-20064][docs] Fix the broken links

2020-11-09 Thread GitBox


flinkbot commented on pull request #14009:
URL: https://github.com/apache/flink/pull/14009#issuecomment-724499700


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit af7a44f12f20bf3a2a02d007128bb1f57e651a83 (Tue Nov 10 
06:47:55 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-19253) SourceReaderTestBase.testAddSplitToExistingFetcher hangs

2020-11-09 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229011#comment-17229011
 ] 

Jiangjie Qin commented on FLINK-19253:
--

[~xuannan] Good catch. Will you submit a fix?

> SourceReaderTestBase.testAddSplitToExistingFetcher hangs
> 
>
> Key: FLINK-19253
> URL: https://issues.apache.org/jira/browse/FLINK-19253
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6521=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-15T10:51:35.5236837Z "SourceFetcher" #39 prio=5 os_prio=0 
> tid=0x7f70d0a57000 nid=0x858 in Object.wait() [0x7f6fd81f]
> 2020-09-15T10:51:35.5237447Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5237962Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5238886Z  - waiting on <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5239380Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5240401Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5241471Z  - locked <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5242180Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5243245Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5244263Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5245128Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5245973Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5247081Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5247816Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5248809Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5249463Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5249827Z 
> 2020-09-15T10:51:35.5250383Z "SourceFetcher" #37 prio=5 os_prio=0 
> tid=0x7f70d0a4b000 nid=0x856 in Object.wait() [0x7f6f80cfa000]
> 2020-09-15T10:51:35.5251124Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5251636Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5252767Z  - waiting on <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5253336Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5254184Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5255220Z  - locked <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5255678Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5256235Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5256803Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5257351Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5257838Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5258284Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5258856Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5259350Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5260011Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5260211Z 
> 2020-09-15T10:51:35.5260574Z "process reaper" #24 daemon prio=10 os_prio=0 
> tid=0x7f6f70042000 nid=0x844 waiting on condition [0x7f6fd832a000]
> 2020-09-15T10:51:35.5261036Z   java.lang.Thread.State: TIMED_WAITING 
> (parking)
> 2020-09-15T10:51:35.5261342Z  at sun.misc.Unsafe.park(Native Method)
> 2020-09-15T10:51:35.5261972Z  - parking to wait for  <0x815d0810> (a 
> java.util.concurrent.SynchronousQueue$TransferStack)
> 2020-09-15T10:51:35.5262456Z  at 
> java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)

[GitHub] [flink] klion26 commented on pull request #13048: [FLINK-18755][docs-zh] RabbitMQ QoS Chinese Documentation

2020-11-09 Thread GitBox


klion26 commented on pull request #13048:
URL: https://github.com/apache/flink/pull/13048#issuecomment-724499075


   @adiaixin thanks for the work, LGTM, merging



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20064) Broken links in the documentation

2020-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20064:
---
Labels: pull-request-available  (was: )

> Broken links in the documentation
> -
>
> Key: FLINK-20064
> URL: https://issues.apache.org/jira/browse/FLINK-20064
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.12.0
>Reporter: Seth Wiesman
>Assignee: Dian Fu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> http://localhost:4000/api/java/:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/streaming:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/streaming/time_attributes.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/streaming/query_configuration.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/streaming/temporal_tables.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/common.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/tableApi.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/python/table-api-users-guide/types.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/dev/connectors/filesystem_sink.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/api/java/org/apache/flink/types/RowKind.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/streaming:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/streaming/time_attributes.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/streaming/query_configuration.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/streaming/temporal_tables.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/common.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/tableApi.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/python/table-api-users-guide/types.html:
> Remote file does not exist -- broken link!!!
> --
> http://localhost:4000/zh/dev/connectors/filesystem_sink.html:
> Remote file does not exist -- broken link!!!
> ---
> Found 18 broken links.
> Search for page containing broken link using 'grep -R BROKEN_PATH DOCS_DIR'



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu opened a new pull request #14009: [FLINK-20064][docs] Fix the broken links

2020-11-09 Thread GitBox


dianfu opened a new pull request #14009:
URL: https://github.com/apache/flink/pull/14009


   
   ## What is the purpose of the change
   
   *This pull request fixes the broken links in the documentation.*
   
   ## Verifying this change
   
   Verified it manually by running ./check_links.sh
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] flinkbot commented on pull request #14008: [FLINK-20013][network] BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-09 Thread GitBox


flinkbot commented on pull request #14008:
URL: https://github.com/apache/flink/pull/14008#issuecomment-724493215


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 409377bf03bf344f259a1386a225ea33b510626c (Tue Nov 10 
06:38:54 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] SteNicholas opened a new pull request #14008: [FLINK-20013][network] BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-09 Thread GitBox


SteNicholas opened a new pull request #14008:
URL: https://github.com/apache/flink/pull/14008


   ## What is the purpose of the change
   
   *`BoundedBlockingSubpartition` may leak a network buffer if the task fails or 
is canceled. `BoundedBlockingSubpartition` needs to close the current buffer so 
that the current BufferConsumer is recycled when the task fails or is canceled.*
   
   ## Brief change log
   
  - *`BoundedBlockingSubpartition#close()` now also closes the current 
BufferConsumer so that it is recycled, avoiding network buffer leakage.*
   
   ## Verifying this change
   
  - *`BoundedBlockingSubpartitionTest` adds a `testRecycleCurrentBufferOnFailure` 
test method to verify that the current BufferConsumer is recycled when the task 
fails or is canceled.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
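
The fix described above can be sketched with a simplified, self-contained model 
(a hedged sketch: `SimpleBuffer` and `MockSubpartition` below are illustrative 
stand-ins invented for this example, not Flink's actual `Buffer` and 
`BoundedBlockingSubpartition` classes):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal stand-in for a reference-counted network buffer.
class SimpleBuffer {
    private int refCount = 1;

    boolean isRecycled() { return refCount == 0; }

    void recycle() {
        if (refCount > 0) {
            refCount--;
        }
    }
}

// Minimal stand-in for the subpartition: finished buffers plus one
// in-flight buffer that is still being written.
class MockSubpartition {
    private final Deque<SimpleBuffer> finishedBuffers = new ArrayDeque<>();
    private SimpleBuffer currentBuffer;

    void add(SimpleBuffer buffer) { finishedBuffers.add(buffer); }

    void setCurrent(SimpleBuffer buffer) { currentBuffer = buffer; }

    // Called when the task fails or is canceled. Before the fix, only the
    // finished buffers were recycled and the in-flight buffer leaked.
    void close() {
        for (SimpleBuffer b : finishedBuffers) {
            b.recycle();
        }
        finishedBuffers.clear();
        if (currentBuffer != null) { // the fix: also recycle the in-flight buffer
            currentBuffer.recycle();
            currentBuffer = null;
        }
    }
}
```

The essential point is the `currentBuffer != null` branch in `close()`: without 
it, a buffer that was still being written when the task failed would never be 
returned to the buffer pool.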







[jira] [Updated] (FLINK-20013) BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20013:
---
Labels: pull-request-available  (was: )

> BoundedBlockingSubpartition may leak network buffer if task is failed or 
> canceled
> -
>
> Key: FLINK-20013
> URL: https://issues.apache.org/jira/browse/FLINK-20013
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Nicholas Jiang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> BoundedBlockingSubpartition may leak a network buffer if the task fails or is 
> canceled. We need to recycle the current BufferConsumer when the task fails 
> or is canceled.





[GitHub] [flink] klion26 commented on pull request #13048: [FLINK-18755][docs-zh] RabbitMQ QoS Chinese Documentation

2020-11-09 Thread GitBox


klion26 commented on pull request #13048:
URL: https://github.com/apache/flink/pull/13048#issuecomment-724489759


   @rmetzger thanks for the reminder, will take a look at this 







[jira] [Closed] (FLINK-19298) Maven enforce goal dependency-convergence failed on flink-json

2020-11-09 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19298?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-19298.
---
Resolution: Cannot Reproduce

> Maven enforce goal dependency-convergence failed on flink-json
> --
>
> Key: FLINK-19298
> URL: https://issues.apache.org/jira/browse/FLINK-19298
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: Jark Wu
>Priority: Critical
>
> See more 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6669=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=9b1a0f88-517b-5893-fc93-76f4670982b4
> {code}
> 2020-09-20T17:08:16.0930669Z 17:08:16.092 [INFO] --- 
> maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) @ flink-json 
> ---
> 2020-09-20T17:08:16.1089006Z 17:08:16.103 [WARNING] 
> 2020-09-20T17:08:16.1089561Z Dependency convergence error for 
> com.google.guava:guava:19.0 paths to dependency are:
> 2020-09-20T17:08:16.1090432Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1091072Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1091670Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1092014Z and
> 2020-09-20T17:08:16.1092496Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1093322Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1093926Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1094521Z   +-org.apache.calcite:calcite-linq4j:1.22.0
> 2020-09-20T17:08:16.1095076Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1095441Z and
> 2020-09-20T17:08:16.1095927Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1096726Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1097419Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1098042Z   +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1098435Z and
> 2020-09-20T17:08:16.1098984Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1099700Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1100359Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1100749Z and
> 2020-09-20T17:08:16.1101293Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1128892Z   
> +-org.apache.flink:flink-test-utils_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1129766Z +-org.apache.curator:curator-test:2.12.0
> 2020-09-20T17:08:16.1130466Z   +-com.google.guava:guava:16.0.1
> 2020-09-20T17:08:16.1130843Z 
> 2020-09-20T17:08:16.1131224Z 17:08:16.109 [WARNING] 
> 2020-09-20T17:08:16.1132069Z Dependency convergence error for 
> org.codehaus.janino:commons-compiler:3.0.9 paths to dependency are:
> 2020-09-20T17:08:16.1133127Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1133906Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1134663Z +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1135224Z and
> 2020-09-20T17:08:16.1135772Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1136487Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1137150Z +-org.codehaus.janino:janino:3.0.9
> 2020-09-20T17:08:16.1137825Z   
> +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1138250Z and
> 2020-09-20T17:08:16.1138798Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1139514Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1141028Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1141782Z   
> +-org.codehaus.janino:commons-compiler:3.0.11
> 2020-09-20T17:08:16.1142140Z and
> 2020-09-20T17:08:16.1142635Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1143270Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1143913Z +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1144215Z 
> 2020-09-20T17:08:16.1144498Z 17:08:16.111 [WARNING] 
> 2020-09-20T17:08:16.1144944Z Dependency convergence error for 
> org.codehaus.janino:janino:3.0.9 paths to dependency are:
> 2020-09-20T17:08:16.1145609Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1146233Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1146852Z +-org.codehaus.janino:janino:3.0.9
> 2020-09-20T17:08:16.1147166Z and
> 2020-09-20T17:08:16.1147654Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1148298Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT

[jira] [Commented] (FLINK-19298) Maven enforce goal dependency-convergence failed on flink-json

2020-11-09 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229009#comment-17229009
 ] 

Jark Wu commented on FLINK-19298:
-

Sure. I think this might be fixed by other tickets. 

> Maven enforce goal dependency-convergence failed on flink-json
> --
>
> Key: FLINK-19298
> URL: https://issues.apache.org/jira/browse/FLINK-19298
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.12.0
>Reporter: Jark Wu
>Priority: Critical
>
> See more 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6669=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=9b1a0f88-517b-5893-fc93-76f4670982b4
> {code}
> 2020-09-20T17:08:16.0930669Z 17:08:16.092 [INFO] --- 
> maven-enforcer-plugin:3.0.0-M1:enforce (dependency-convergence) @ flink-json 
> ---
> 2020-09-20T17:08:16.1089006Z 17:08:16.103 [WARNING] 
> 2020-09-20T17:08:16.1089561Z Dependency convergence error for 
> com.google.guava:guava:19.0 paths to dependency are:
> 2020-09-20T17:08:16.1090432Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1091072Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1091670Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1092014Z and
> 2020-09-20T17:08:16.1092496Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1093322Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1093926Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1094521Z   +-org.apache.calcite:calcite-linq4j:1.22.0
> 2020-09-20T17:08:16.1095076Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1095441Z and
> 2020-09-20T17:08:16.1095927Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1096726Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1097419Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1098042Z   +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1098435Z and
> 2020-09-20T17:08:16.1098984Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1099700Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1100359Z +-com.google.guava:guava:19.0
> 2020-09-20T17:08:16.1100749Z and
> 2020-09-20T17:08:16.1101293Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1128892Z   
> +-org.apache.flink:flink-test-utils_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1129766Z +-org.apache.curator:curator-test:2.12.0
> 2020-09-20T17:08:16.1130466Z   +-com.google.guava:guava:16.0.1
> 2020-09-20T17:08:16.1130843Z 
> 2020-09-20T17:08:16.1131224Z 17:08:16.109 [WARNING] 
> 2020-09-20T17:08:16.1132069Z Dependency convergence error for 
> org.codehaus.janino:commons-compiler:3.0.9 paths to dependency are:
> 2020-09-20T17:08:16.1133127Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1133906Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1134663Z +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1135224Z and
> 2020-09-20T17:08:16.1135772Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1136487Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1137150Z +-org.codehaus.janino:janino:3.0.9
> 2020-09-20T17:08:16.1137825Z   
> +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1138250Z and
> 2020-09-20T17:08:16.1138798Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1139514Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1141028Z +-org.apache.calcite:calcite-core:1.22.0
> 2020-09-20T17:08:16.1141782Z   
> +-org.codehaus.janino:commons-compiler:3.0.11
> 2020-09-20T17:08:16.1142140Z and
> 2020-09-20T17:08:16.1142635Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1143270Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1143913Z +-org.codehaus.janino:commons-compiler:3.0.9
> 2020-09-20T17:08:16.1144215Z 
> 2020-09-20T17:08:16.1144498Z 17:08:16.111 [WARNING] 
> 2020-09-20T17:08:16.1144944Z Dependency convergence error for 
> org.codehaus.janino:janino:3.0.9 paths to dependency are:
> 2020-09-20T17:08:16.1145609Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1146233Z   
> +-org.apache.flink:flink-table-planner-blink_2.11:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1146852Z +-org.codehaus.janino:janino:3.0.9
> 2020-09-20T17:08:16.1147166Z and
> 2020-09-20T17:08:16.1147654Z +-org.apache.flink:flink-json:1.12-SNAPSHOT
> 2020-09-20T17:08:16.1148298Z   
> 

[jira] [Commented] (FLINK-13733) FlinkKafkaInternalProducerITCase.testHappyPath fails on Travis

2020-11-09 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229008#comment-17229008
 ] 

Robert Metzger commented on FLINK-13733:


[~renqs] what's the status of the PR increasing the timeout?

> FlinkKafkaInternalProducerITCase.testHappyPath fails on Travis
> --
>
> Key: FLINK-13733
> URL: https://issues.apache.org/jira/browse/FLINK-13733
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.9.0, 1.10.0, 1.11.0, 1.12.0
>Reporter: Till Rohrmann
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
> Attachments: 20200421.13.tar.gz
>
>
> The {{FlinkKafkaInternalProducerITCase.testHappyPath}} fails on Travis with 
> {code}
> Test 
> testHappyPath(org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase)
>  failed with:
> java.util.NoSuchElementException
>   at 
> org.apache.kafka.common.utils.AbstractIterator.next(AbstractIterator.java:52)
>   at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.getOnlyElement(Iterators.java:302)
>   at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterables.getOnlyElement(Iterables.java:289)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.assertRecord(FlinkKafkaInternalProducerITCase.java:169)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.testHappyPath(FlinkKafkaInternalProducerITCase.java:70)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}
> https://api.travis-ci.org/v3/job/571870358/log.txt





[GitHub] [flink] flinkbot edited a comment on pull request #13997: [FLINK-20058][kafka connector] Improve tests for per-partition-waterm…

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13997:
URL: https://github.com/apache/flink/pull/13997#issuecomment-723972119


   
   ## CI report:
   
   * 40ea9fbd0871d09170da50217c4057cc81ff1203 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9370)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9348)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723733200


   
   ## CI report:
   
   * bcabafeacefca7370b3c2f3569ff8608704a282b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9331)
 
   * 5a9da943be11b60050d27af2c1fc3887b0a2d5e1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9379)
 
   
   
   







[jira] [Commented] (FLINK-19436) TPC-DS end-to-end test (Blink planner) failed during shutdown

2020-11-09 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229004#comment-17229004
 ] 

Robert Metzger commented on FLINK-19436:


I see. Thanks a lot for the detailed explanation. It seems that the TPC-DS 
tests are "stopping" all the leftover pids from previous tests.
I wonder why this error is occurring now. Maybe some other change has broken a 
cleanup mechanism.

Thanks a lot for this good analysis. I'll take a look at the PR.

> TPC-DS end-to-end test (Blink planner) failed during shutdown
> -
>
> Key: FLINK-19436
> URL: https://issues.apache.org/jira/browse/FLINK-19436
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: image-2020-11-10-11-08-53-199.png, 
> image-2020-11-10-11-09-20-534.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7009=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2020-09-27T22:37:53.2236467Z Stopping taskexecutor daemon (pid: 2992) on host 
> fv-az655.
> 2020-09-27T22:37:53.4450715Z Stopping standalonesession daemon (pid: 2699) on 
> host fv-az655.
> 2020-09-27T22:37:53.8014537Z Skipping taskexecutor daemon (pid: 11173), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8019740Z Skipping taskexecutor daemon (pid: 11561), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8022857Z Skipping taskexecutor daemon (pid: 11849), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8023616Z Skipping taskexecutor daemon (pid: 12180), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8024327Z Skipping taskexecutor daemon (pid: 12950), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025027Z Skipping taskexecutor daemon (pid: 13472), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025727Z Skipping taskexecutor daemon (pid: 16577), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8026417Z Skipping taskexecutor daemon (pid: 16959), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027086Z Skipping taskexecutor daemon (pid: 17250), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027770Z Skipping taskexecutor daemon (pid: 17601), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8028400Z Stopping taskexecutor daemon (pid: 18438) on 
> host fv-az655.
> 2020-09-27T22:37:53.8029314Z 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/bin/taskmanager.sh:
>  line 99: 18438 Terminated  "${FLINK_BIN_DIR}"/flink-daemon.sh 
> $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
> 2020-09-27T22:37:53.8029895Z [FAIL] Test script contains errors.
> 2020-09-27T22:37:53.8032092Z Checking for errors...
> 2020-09-27T22:37:55.3713368Z No errors in log files.
> 2020-09-27T22:37:55.3713935Z Checking for exceptions...
> 2020-09-27T22:37:56.9046391Z No exceptions in log files.
> 2020-09-27T22:37:56.9047333Z Checking for non-empty .out files...
> 2020-09-27T22:37:56.9064402Z No non-empty .out files.
> 2020-09-27T22:37:56.9064859Z 
> 2020-09-27T22:37:56.9065588Z [FAIL] 'TPC-DS end-to-end test (Blink planner)' 
> failed after 16 minutes and 54 seconds! Test exited with exit code 1
> {code}





[jira] [Commented] (FLINK-15747) Enable setting RocksDB log level from configuration

2020-11-09 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17229002#comment-17229002
 ] 

Yu Li commented on FLINK-15747:
---

[~qinjunjerry] I'm afraid not. Checking the 
[PR|https://github.com/ververica/frocksdb/pull/12] we can see there are some 
unresolved problems, and the work to upgrade frocksdb to a higher version 
(FLINK-14482) has also been postponed due to a performance regression. Given 
that the 1.12.0 feature freeze date has passed, we probably need to postpone 
this one to a later release.

> Enable setting RocksDB log level from configuration
> ---
>
> Key: FLINK-15747
> URL: https://issues.apache.org/jira/browse/FLINK-15747
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yu Li
>Assignee: Congxian Qiu
>Priority: Major
> Fix For: 1.12.0
>
>
> Currently to open the RocksDB local log, one has to create a customized 
> {{OptionsFactory}}, which is not quite convenient. This JIRA proposes to 
> enable setting it from configuration in flink-conf.yaml.





[GitHub] [flink] flinkbot edited a comment on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723733200


   
   ## CI report:
   
   * bcabafeacefca7370b3c2f3569ff8608704a282b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9331)
 
   * 5a9da943be11b60050d27af2c1fc3887b0a2d5e1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-14482) Bump up rocksdb version

2020-11-09 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228997#comment-17228997
 ] 

Yu Li commented on FLINK-14482:
---

Some updates here: [~yunta] and I have verified that 5.18.3 with the fix for 
FLINK-19710 achieves the same performance as 5.17.2, but we still observed a 
~5% regression on the list/map get/iterator benchmarks with 6.x, indicating 
that new issues were introduced in the higher versions. We have therefore 
decided to postpone the upgrade until we locate these new issues, aiming to 
complete the upgrade to 6.x in the 1.13.0 release.

> Bump up rocksdb version
> ---
>
> Key: FLINK-14482
> URL: https://issues.apache.org/jira/browse/FLINK-14482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> Current rocksDB-5.17.2 does not support write buffer manager well, we need to 
> bump rocksdb version to support that feature.





[GitHub] [flink] flinkbot edited a comment on pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14006:
URL: https://github.com/apache/flink/pull/14006#issuecomment-724437611


   
   ## CI report:
   
   * 1ee231fa8559cdc63ec4956262da92496698826a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9375)
 
   
   




[jira] [Updated] (FLINK-20068) KafkaSubscriberTest.testTopicPatternSubscriber failed with unexpected results

2020-11-09 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20068:

Priority: Critical  (was: Major)

> KafkaSubscriberTest.testTopicPatternSubscriber failed with unexpected results
> -
>
> Key: FLINK-20068
> URL: https://issues.apache.org/jira/browse/FLINK-20068
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9365=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> 2020-11-10T00:14:22.7658242Z [ERROR] 
> testTopicPatternSubscriber(org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest)
>   Time elapsed: 0.012 s  <<< FAILURE!
> 2020-11-10T00:14:22.7659838Z java.lang.AssertionError: 
> expected:<[pattern-topic-5, pattern-topic-4, pattern-topic-7, 
> pattern-topic-6, pattern-topic-9, pattern-topic-8, pattern-topic-1, 
> pattern-topic-0, pattern-topic-3]> but was:<[]>
> 2020-11-10T00:14:22.7660740Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-11-10T00:14:22.7661245Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-11-10T00:14:22.7661788Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-11-10T00:14:22.7662312Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-11-10T00:14:22.7663051Z  at 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest.testTopicPatternSubscriber(KafkaSubscriberTest.java:94)
> {code}





[jira] [Updated] (FLINK-20068) KafkaSubscriberTest.testTopicPatternSubscriber failed with unexpected results

2020-11-09 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20068?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20068:

Labels: test-stability  (was: )

> KafkaSubscriberTest.testTopicPatternSubscriber failed with unexpected results
> -
>
> Key: FLINK-20068
> URL: https://issues.apache.org/jira/browse/FLINK-20068
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9365=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5
> {code}
> 2020-11-10T00:14:22.7658242Z [ERROR] 
> testTopicPatternSubscriber(org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest)
>   Time elapsed: 0.012 s  <<< FAILURE!
> 2020-11-10T00:14:22.7659838Z java.lang.AssertionError: 
> expected:<[pattern-topic-5, pattern-topic-4, pattern-topic-7, 
> pattern-topic-6, pattern-topic-9, pattern-topic-8, pattern-topic-1, 
> pattern-topic-0, pattern-topic-3]> but was:<[]>
> 2020-11-10T00:14:22.7660740Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-11-10T00:14:22.7661245Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-11-10T00:14:22.7661788Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-11-10T00:14:22.7662312Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-11-10T00:14:22.7663051Z  at 
> org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest.testTopicPatternSubscriber(KafkaSubscriberTest.java:94)
> {code}





[jira] [Created] (FLINK-20068) KafkaSubscriberTest.testTopicPatternSubscriber failed with unexpected results

2020-11-09 Thread Dian Fu (Jira)
Dian Fu created FLINK-20068:
---

 Summary: KafkaSubscriberTest.testTopicPatternSubscriber failed 
with unexpected results
 Key: FLINK-20068
 URL: https://issues.apache.org/jira/browse/FLINK-20068
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka
Affects Versions: 1.12.0
Reporter: Dian Fu
 Fix For: 1.12.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9365=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5

{code}
2020-11-10T00:14:22.7658242Z [ERROR] 
testTopicPatternSubscriber(org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest)
  Time elapsed: 0.012 s  <<< FAILURE!
2020-11-10T00:14:22.7659838Z java.lang.AssertionError: 
expected:<[pattern-topic-5, pattern-topic-4, pattern-topic-7, pattern-topic-6, 
pattern-topic-9, pattern-topic-8, pattern-topic-1, pattern-topic-0, 
pattern-topic-3]> but was:<[]>
2020-11-10T00:14:22.7660740Z  at org.junit.Assert.fail(Assert.java:88)
2020-11-10T00:14:22.7661245Z  at 
org.junit.Assert.failNotEquals(Assert.java:834)
2020-11-10T00:14:22.7661788Z  at 
org.junit.Assert.assertEquals(Assert.java:118)
2020-11-10T00:14:22.7662312Z  at 
org.junit.Assert.assertEquals(Assert.java:144)
2020-11-10T00:14:22.7663051Z  at 
org.apache.flink.connector.kafka.source.enumerator.subscriber.KafkaSubscriberTest.testTopicPatternSubscriber(KafkaSubscriberTest.java:94)
{code}





[GitHub] [flink] flinkbot edited a comment on pull request #13994: [FLINK-20054][formats] Fix ParquetInputFormat 3 level List handling

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13994:
URL: https://github.com/apache/flink/pull/13994#issuecomment-723838734


   
   ## CI report:
   
   * 3535a55f3da9d313f350ce9a7eb7c0b2b76c2fa0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9337)
 
   * 80f4c4cbefeedfce494e9f0c08ef5f5281c0bdd8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9377)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13750: [FLINK-19394][docs-zh] Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13750:
URL: https://github.com/apache/flink/pull/13750#issuecomment-714567487


   
   ## CI report:
   
   * 930c16e764a8e3c1ce6352ff7389cffd5c88485b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9373)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14004: [FLINK-20063][connector files] FileSourceReader request only a split if it doesn't have one already

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14004:
URL: https://github.com/apache/flink/pull/14004#issuecomment-724364776


   
   ## CI report:
   
   * 277d5f08df4937d4b50db2f48eda1a27776cea7e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9366)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14006:
URL: https://github.com/apache/flink/pull/14006#issuecomment-724437611


   
   ## CI report:
   
   * 1ee231fa8559cdc63ec4956262da92496698826a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9375)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13999: [FLINK-19436][tests] Properly shutdown cluster in e2e tests

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13999:
URL: https://github.com/apache/flink/pull/13999#issuecomment-724041815


   
   ## CI report:
   
   * 4ba1990236a250faa7681147f217832e499e3d95 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9355)
 
   * 3407a109e949dd38a5affc38f8adedf81a2578fc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9374)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #14007: [FLINK-19945][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #14007:
URL: https://github.com/apache/flink/pull/14007#issuecomment-724437781


   
   ## CI report:
   
   * 3229475639c0140db85f415a0864b8e5006cad1b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9376)
 
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13994: [FLINK-20054][formats] Fix ParquetInputFormat 3 level List handling

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13994:
URL: https://github.com/apache/flink/pull/13994#issuecomment-723838734


   
   ## CI report:
   
   * 3535a55f3da9d313f350ce9a7eb7c0b2b76c2fa0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9337)
 
   * 80f4c4cbefeedfce494e9f0c08ef5f5281c0bdd8 UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 1ae89af64c82b8cb9782dd5975f636a795ca978e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9368)
 
   
   




[jira] [Issue Comment Deleted] (FLINK-19937) Support sink parallelism option for all connectors

2020-11-09 Thread Lsw_aka_laplace (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lsw_aka_laplace updated FLINK-19937:

Comment: was deleted

(was: [~hackergin] 

Hi jinfeng,

I have a similar concern, especially from when I applied SINK_PARALLELISM to 
specific connectors. In `PhysicalCommonSink`, for now, we can set the 
parallelism but there is no way to set the partitioning strategy. Sometimes 
this can be misleading or may cause unexpected behavior.

That said, the concern mentioned above seems to be a separate topic. From my 
perspective, there are basically two options:

     1. Keep it as it is, while making users aware of this concern if they 
want to configure the sink parallelism.

     2. Add a new interface, or a new method in `ParallelismProvider` called 
`partitioningStrategy`, to give users access so that they can choose their 
own strategy.

 Both options require the user to have knowledge of DataStream, which seems 
contrary to the 'less knowledge' principle.)

> Support sink parallelism option for all connectors
> --
>
> Key: FLINK-19937
> URL: https://issues.apache.org/jira/browse/FLINK-19937
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: Lsw_aka_laplace
>Priority: Major
>  Labels: pull-request-available
>
> Since https://issues.apache.org/jira/browse/FLINK-19727 has been done.
> SINK_PARALLELISM option and `ParallelismProvider` should be applied for all 
> existing `DynamicTableSink` of connectors in order to give users access to 
> setting their own sink parallelism.
>  
>  
> Update:
> Anybody who works on this issue should refrence to FLINK-19727~
> `ParallelismProvider` should work with `SinkRuntimeProvider`, actually 
> `SinkFunctionProvider` and `OutputFormatProvider`  has implemented 
> `ParallelismProvider`. And `SINK_PARALLELISM`  has already defined in 
> `FactoryUtil`, plz reuse it.





[jira] [Commented] (FLINK-20038) Rectify the usage of ResultPartitionType#isPipelined() in partition tracker.

2020-11-09 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228962#comment-17228962
 ] 

Yuan Mei commented on FLINK-20038:
--

Redirect from FLINK-19693:

 

Hey, [~jinxing6...@126.com], these are great points! I have similar 
feelings/considerations when I introduced a new ResultPartitionType 
PIPELINED_APPROXIMATE and the corresponding `reconnectable` attribute in  
FLINK-19632.

Some thoughts here:

Each shuffle mode may require subtly different
 # scheduling strategy
 # failover strategy
 # lifecycle management
 # runtime implementations

To some extent, the above four items are correlated with one another.

For example, PIPELINED_APPROXIMATE is *pipelined* in the sense that downstream 
tasks can start consuming data before upstream tasks finish. However, it is 
*blocking* in the sense that the result partitions are re-connectable (but not 
re-consumable, so strictly speaking it is not fully blocking either). 
PIPELINED_APPROXIMATE's runtime implementation differs a bit from pipelined in 
that it has to handle partial records, as done in FLINK-19547. It needs a 
dedicated failover strategy to restart only the failed tasks, and the existing 
scheduling strategy may or may not be reusable (FLINK-20048). This is what I 
mean by *"correlated with one another"*.

Hence, I do not think those four items can be treated as completely 
independent of each other. However, today it seems quite difficult to extend 
some of the above (if not all) to link 1-2-3-4 as a whole, and this is one of 
the most valuable lessons I've learned from implementing approximate local 
recovery.

So, my question is: 

Do we have plans to expose more interfaces to ease such extensions? Here are 
some preliminary thoughts, which would probably also be useful if we later 
want to support channel data stored in DSTL:
 # User-defined/configurable Result Partition Type with configurable attributes
 # Lifecycle management of different Result Partition Types that can be 
registered with the JobMaster
 # User-defined scheduling strategy based on Result Partition Type
 # User-defined failover strategy based on Result Partition Type
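The attribute combination at the heart of this discussion can be modeled in a 
few lines. The sketch below is illustrative only: the class and function names 
are made up and do not reflect Flink's actual `ResultPartitionType` API; it 
merely shows why a "pipelined but tracked" partition type breaks the current 
assumption that pipelined partitions never need the partition tracker.

```python
from dataclasses import dataclass

# Hypothetical model of a configurable result partition type. The attribute
# names mirror the characteristics discussed above, not Flink's real enum.
@dataclass(frozen=True)
class PartitionType:
    is_pipelined: bool       # downstream may consume before upstream finishes
    has_back_pressure: bool  # upstream blocks when downstream is slow
    is_reconnectable: bool   # partition survives a downstream failover

# The disk-backed shuffle described in FLINK-20038: pipelined consumption,
# no back pressure, replayable on downstream failover.
NEW_SHUFFLE = PartitionType(is_pipelined=True, has_back_pressure=False,
                            is_reconnectable=True)

def tracked_by_partition_tracker(t: PartitionType) -> bool:
    # Current assumption (see JobMasterPartitionTrackerImpl): pipelined
    # partitions are released on consumption and are never tracked --
    # exactly the assumption the new shuffle manner violates.
    return not t.is_pipelined

print(tracked_by_partition_tracker(NEW_SHUFFLE))  # prints False
```

Despite needing lifecycle management, the new type is classified as untracked, 
which is the contradiction the issue describes.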

 

> Rectify the usage of ResultPartitionType#isPipelined() in partition tracker.
> 
>
> Key: FLINK-20038
> URL: https://issues.apache.org/jira/browse/FLINK-20038
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination, Runtime / Network
>Reporter: Jin Xing
>Priority: Major
>
> After "FLIP-31: Pluggable Shuffle Service", users can extend and plug in new 
> shuffle manners to benefit different scenarios. New shuffle manners tend to 
> bring in new abilities which could be leveraged by the scheduling layer to 
> provide better performance.
> From my understanding, the characteristics of a shuffle manner are exposed 
> by ResultPartitionType (e.g. isPipelined, isBlocking, hasBackPressure, ...) 
> and leveraged by the scheduling layer to conduct the job. But it seems that 
> Flink doesn't provide a way to describe the new characteristics of a 
> plugged-in shuffle manner. I also find that the scheduling layer has some 
> weak assumptions about ResultPartitionType, which I detail in the example 
> below.
> In our internal Flink, we developed a new shuffle manner for batch jobs. Its 
> characteristics can be summarized briefly as:
> 1. The upstream task shuffle-writes data to DISK;
> 2. The upstream task commits data while producing and notifies downstream 
> that the data is "consumable" BEFORE the task finishes;
> 3. Downstream tasks are notified when upstream data is consumable and can be 
> scheduled according to available resources;
> 4. When a downstream task fails over, only that task needs to be restarted, 
> because the upstream data is written to disk and replayable.
> We can characterize this new shuffle manner as:
> a. isPipelined=true – downstream tasks can consume data before upstream 
> tasks finish;
> b. hasBackPressure=false – upstream tasks shuffle-write data to disk and can 
> finish on their own regardless of whether a downstream task consumes the 
> data in time.
> But the above new ResultPartitionType (isPipelined=true, 
> hasBackPressure=false) seems to contradict the partition lifecycle 
> management in the current scheduling layer:
> 1. The new shuffle manner needs the partition tracker for lifecycle 
> management, but current Flink assumes that ALL "isPipelined=true" result 
> partitions are released on consumption and will not be taken care of by the 
> partition tracker 
> ([link|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/JobMasterPartitionTrackerImpl.java#L66])
>  – this limitation is not correct for this case.
> From my understanding, the method of ResultPartitionType#isPipelined() 
> indicates whether data can be 

[GitHub] [flink] flinkbot commented on pull request #14007: [FLINK-19945][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


flinkbot commented on pull request #14007:
URL: https://github.com/apache/flink/pull/14007#issuecomment-724437781


   
   ## CI report:
   
   * 3229475639c0140db85f415a0864b8e5006cad1b UNKNOWN
   
   




[GitHub] [flink] flinkbot commented on pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


flinkbot commented on pull request #14006:
URL: https://github.com/apache/flink/pull/14006#issuecomment-724437611


   
   ## CI report:
   
   * 1ee231fa8559cdc63ec4956262da92496698826a UNKNOWN
   
   




[GitHub] [flink] flinkbot edited a comment on pull request #13999: [FLINK-19436][tests] Properly shutdown cluster in e2e tests

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13999:
URL: https://github.com/apache/flink/pull/13999#issuecomment-724041815


   
   ## CI report:
   
   * 4ba1990236a250faa7681147f217832e499e3d95 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9355)
 
   * 3407a109e949dd38a5affc38f8adedf81a2578fc UNKNOWN
   
   




[jira] [Commented] (FLINK-19436) TPC-DS end-to-end test (Blink planner) failed during shutdown

2020-11-09 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228960#comment-17228960
 ] 

Leonard Xu commented on FLINK-19436:


And in these failed tests, each failed TM's pid (e.g. *4627*) can be found in 
the previous e2e test's log:

[PASS] 'Netty shuffle direct memory consumption end-to-end test' passed after 2 
minutes and 23 seconds! Test exited with exit code 0.
2020-10-18T22:26:05.8214649Z Stopping taskexecutor daemon (pid: 5306) on host 
fv-az679.
2020-10-18T22:26:06.0204996Z Stopping standalonesession daemon (pid: 3673) on 
host fv-az679.
 
2020-10-18T22:26:06.6608513Z Jps
2020-10-18T22:26:06.7766551Z *{color:#de350b}4627{color}* TaskManagerRunner
2020-10-18T22:26:06.7911919Z 3962 TaskManagerRunner
2020-10-18T22:26:06.8175144Z 6875 Jps
2020-10-18T22:26:06.8219362Z Disk information

2020-10-18T22:58:21.1330389Z [INFO] Validation succeeded for file: 99.ans 
(103/103)
2020-10-18T22:58:21.4205488Z Stopping taskexecutor daemon (pid: 5407) on host 
fv-az679.
2020-10-18T22:58:21.6132761Z Stopping standalonesession daemon (pid: 5120) on 
host fv-az679.

2020-10-18T22:58:21.9654801Z Stopping taskexecutor daemon (pid: 4627) on host 
fv-az679.
2020-10-18T22:58:21.9670548Z 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test-runner-common.sh: 
line 47: *{color:#de350b}4627{color}* Terminated ${command}
2020-10-18T22:58:21.9671129Z [FAIL] Test script contains errors.

 

I'm inclined to stop all TMs in common.shutdown_all before we `kill -9` them. 
What do you think? [~rmetzger]
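The proposed order of operations (graceful stop first, hard kill only as a 
fallback) can be sketched as follows. This is a self-contained illustration, 
not the actual common.sh change: a `sleep` subprocess stands in for a 
TaskManagerRunner, and the real fix would invoke `taskmanager.sh stop` before 
the script falls back to `kill -9`.

```python
import subprocess

# A long-running dummy process standing in for a TaskManagerRunner.
tm = subprocess.Popen(["sleep", "300"])

tm.terminate()            # graceful stop (SIGTERM), like taskmanager.sh stop
try:
    tm.wait(timeout=5)    # give the process time to shut down cleanly
except subprocess.TimeoutExpired:
    tm.kill()             # only then fall back to the hard kill -9
    tm.wait()

# On Linux a negative return code encodes the terminating signal.
print("terminated by signal", -tm.returncode)  # prints: terminated by signal 15
```

With this ordering the "Terminated ${command}" noise from killing a still-live 
daemon should disappear, since the hard kill only ever hits stragglers.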

> TPC-DS end-to-end test (Blink planner) failed during shutdown
> -
>
> Key: FLINK-19436
> URL: https://issues.apache.org/jira/browse/FLINK-19436
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
> Attachments: image-2020-11-10-11-08-53-199.png, 
> image-2020-11-10-11-09-20-534.png
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7009=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=2b7514ee-e706-5046-657b-3430666e7bd9
> {code}
> 2020-09-27T22:37:53.2236467Z Stopping taskexecutor daemon (pid: 2992) on host 
> fv-az655.
> 2020-09-27T22:37:53.4450715Z Stopping standalonesession daemon (pid: 2699) on 
> host fv-az655.
> 2020-09-27T22:37:53.8014537Z Skipping taskexecutor daemon (pid: 11173), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8019740Z Skipping taskexecutor daemon (pid: 11561), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8022857Z Skipping taskexecutor daemon (pid: 11849), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8023616Z Skipping taskexecutor daemon (pid: 12180), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8024327Z Skipping taskexecutor daemon (pid: 12950), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025027Z Skipping taskexecutor daemon (pid: 13472), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8025727Z Skipping taskexecutor daemon (pid: 16577), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8026417Z Skipping taskexecutor daemon (pid: 16959), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027086Z Skipping taskexecutor daemon (pid: 17250), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8027770Z Skipping taskexecutor daemon (pid: 17601), 
> because it is not running anymore on fv-az655.
> 2020-09-27T22:37:53.8028400Z Stopping taskexecutor daemon (pid: 18438) on 
> host fv-az655.
> 2020-09-27T22:37:53.8029314Z 
> /home/vsts/work/1/s/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT/bin/taskmanager.sh:
>  line 99: 18438 Terminated  "${FLINK_BIN_DIR}"/flink-daemon.sh 
> $STARTSTOP $ENTRYPOINT "${ARGS[@]}"
> 2020-09-27T22:37:53.8029895Z [FAIL] Test script contains errors.
> 2020-09-27T22:37:53.8032092Z Checking for errors...
> 2020-09-27T22:37:55.3713368Z No errors in log files.
> 2020-09-27T22:37:55.3713935Z Checking for exceptions...
> 2020-09-27T22:37:56.9046391Z No exceptions in log files.
> 2020-09-27T22:37:56.9047333Z Checking for non-empty .out files...
> 2020-09-27T22:37:56.9064402Z No non-empty .out files.
> 2020-09-27T22:37:56.9064859Z 
> 2020-09-27T22:37:56.9065588Z [FAIL] 'TPC-DS end-to-end test (Blink planner)' 
> failed after 16 minutes and 54 seconds! Test exited with exit code 1
> {code}





[jira] [Commented] (FLINK-12751) Create file based HA support

2020-11-09 Thread Slim Bouguerra (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228959#comment-17228959
 ] 

Slim Bouguerra commented on FLINK-12751:


Hi guys, is there anything blocking this Jira/PR? We are making some design 
decisions and would love to base them on a reasonable ETA.

In our case this HA story will work out of the box since we are using an NFS 
mount.

[~borisl] did you run this in prod? Can you please share your 
thoughts/experience from deploying this?

Thanks.

> Create file based HA support
> 
>
> Key: FLINK-12751
> URL: https://issues.apache.org/jira/browse/FLINK-12751
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Coordination
>Affects Versions: 1.8.0, 1.9.0, 2.0.0
> Environment: Flink on k8 and Mini cluster
>Reporter: Boris Lublinsky
>Priority: Major
>  Labels: features, pull-request-available
>   Original Estimate: 168h
>  Time Spent: 10m
>  Remaining Estimate: 167h 50m
>
> In the current Flink implementation, HA support can be implemented either 
> using ZooKeeper or a custom factory class.
> Add an HA implementation based on PVC. The idea behind this implementation
> is as follows:
> * Because the implementation assumes a single instance of the Job manager (Job 
> manager selection and restarts are done by a K8s Deployment of 1), URL 
> management is done using the StandaloneHaServices implementation (in the case 
> of a cluster) and the EmbeddedHaServices implementation (in the case of a mini 
> cluster)
> * For management of the submitted job graphs, the checkpoint counter and 
> completed checkpoints, the implementation leverages the following file 
> system layout
> {code}
>  ha -> root of the HA data
>  checkpointcounter -> checkpoint counter folder
>   -> job id folder
>   -> counter file
>   -> another job id folder
>  ...
>  completedCheckpoint -> completed checkpoint folder
>   -> job id folder
>   -> checkpoint file
>   -> checkpoint file
>  ...
>   -> another job id folder
>  ...
>  submittedJobGraph -> submitted graph folder
>   -> job id folder
>   -> graph file
>   -> another job id folder
>  ...
> {code}
> The implementation overwrites 2 of the Flink files:
> * HighAvailabilityServicesUtils - added `FILESYSTEM` option for picking HA 
> service
> * HighAvailabilityMode - added `FILESYSTEM` to available HA options.
> The actual implementation adds the following classes:
> * `FileSystemHAServices` - an implementation of a `HighAvailabilityServices` 
> for file system
> * `FileSystemUtils` - support class for creation of runtime components.
> * `FileSystemStorageHelper` - file system operations implementation for 
> filesystem based HA
> * `FileSystemCheckpointRecoveryFactory` - an implementation of a 
> `CheckpointRecoveryFactory`for file system
> * `FileSystemCheckpointIDCounter` - an implementation of a 
> `CheckpointIDCounter` for file system
> * `FileSystemCompletedCheckpointStore` - an implementation of a 
> `CompletedCheckpointStore` for file system
> * `FileSystemSubmittedJobGraphStore` - an implementation of a 
> `SubmittedJobGraphStore` for file system
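As a rough illustration of the layout described above, here is a minimal sketch of a file-backed checkpoint ID counter that persists its value under `ha/checkpointcounter/<job-id>/counter`. All class and method names here are illustrative only; this is not the actual `FileSystemCheckpointIDCounter` from the PR.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch: a checkpoint counter stored as a single file in the
// HA directory layout described in the issue. Not Flink's actual API.
class FileBackedCheckpointCounter {
    private final Path counterFile;

    FileBackedCheckpointCounter(Path haRoot, String jobId) throws IOException {
        Path jobDir = haRoot.resolve("checkpointcounter").resolve(jobId);
        Files.createDirectories(jobDir);
        this.counterFile = jobDir.resolve("counter");
    }

    long get() throws IOException {
        if (!Files.exists(counterFile)) {
            return 0L; // no checkpoints taken yet
        }
        return Long.parseLong(
                Files.readString(counterFile, StandardCharsets.UTF_8).trim());
    }

    long getAndIncrement() throws IOException {
        long current = get();
        // Write to a temp file and move atomically, so a crash mid-write
        // never leaves a corrupt counter behind.
        Path tmp = counterFile.resolveSibling("counter.tmp");
        Files.writeString(tmp, Long.toString(current + 1), StandardCharsets.UTF_8);
        Files.move(tmp, counterFile, StandardCopyOption.ATOMIC_MOVE,
                StandardCopyOption.REPLACE_EXISTING);
        return current;
    }
}

public class FileHaCounterSketch {
    public static void main(String[] args) throws IOException {
        Path haRoot = Files.createTempDirectory("flink-ha-sketch");
        FileBackedCheckpointCounter counter =
                new FileBackedCheckpointCounter(haRoot, "job-42");
        System.out.println(counter.getAndIncrement()); // prints 0
        System.out.println(counter.getAndIncrement()); // prints 1
        System.out.println(counter.get());             // prints 2
    }
}
```

An NFS mount, as mentioned in the comment above, works with this scheme because all JobManager restarts see the same files.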



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19253) SourceReaderTestBase.testAddSplitToExistingFetcher hangs

2020-11-09 Thread Xuannan Su (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228958#comment-17228958
 ] 

Xuannan Su commented on FLINK-19253:


I think there is a potential race condition in SplitFetcher.

It occurs when one thread is executing the SplitFetcher#checkAndSetIdle method 
while another thread is adding splits to the SplitFetcher with 
SplitFetcher#addSplits. That is, checkAndSetIdle first checks whether it should 
go idle and then sets the isIdle flag. Between these two steps, another thread 
could call addSplits, which puts a new task into the taskQueue and sets the 
isIdle flag to false. Then the first thread sets the isIdle flag to true.

We need to synchronize the threads that modify the isIdle flag.

cc [~becket_qin]
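The interleaving described above can be sketched with a minimal model. This is an illustrative toy, not the actual SplitFetcher code: guarding both the check-then-set and the enqueue with one lock makes the idle decision atomic, so a fetcher with pending tasks can never end up marked idle.

```java
import java.util.ArrayDeque;
import java.util.List;
import java.util.Queue;

// Hypothetical model of the race: without the shared lock, addSplits could
// run between the isEmpty() check and the "idle = true" assignment below.
class IdleFlagModel {
    private final Object lock = new Object();
    private final Queue<String> taskQueue = new ArrayDeque<>();
    private volatile boolean idle = false;

    void checkAndSetIdle() {
        synchronized (lock) {
            if (taskQueue.isEmpty()) {
                idle = true;
            }
        }
    }

    void addSplits(List<String> splits) {
        synchronized (lock) {
            taskQueue.addAll(splits);
            idle = false;
        }
    }

    boolean isIdle() { return idle; }
    int pendingTasks() { return taskQueue.size(); }
}

public class SplitFetcherRaceSketch {
    public static void main(String[] args) throws InterruptedException {
        IdleFlagModel fetcher = new IdleFlagModel();
        Thread checker = new Thread(fetcher::checkAndSetIdle);
        Thread adder = new Thread(() -> fetcher.addSplits(List.of("split-0")));
        checker.start(); adder.start();
        checker.join(); adder.join();
        // With the lock, either interleaving ends with idle == false,
        // since a non-empty queue forces the flag off.
        if (fetcher.pendingTasks() > 0 && fetcher.isIdle()) {
            throw new AssertionError("non-empty fetcher marked idle");
        }
        System.out.println("idle=" + fetcher.isIdle()
                + " pending=" + fetcher.pendingTasks()); // prints idle=false pending=1
    }
}
```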


> SourceReaderTestBase.testAddSplitToExistingFetcher hangs
> 
>
> Key: FLINK-19253
> URL: https://issues.apache.org/jira/browse/FLINK-19253
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=6521=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-09-15T10:51:35.5236837Z "SourceFetcher" #39 prio=5 os_prio=0 
> tid=0x7f70d0a57000 nid=0x858 in Object.wait() [0x7f6fd81f]
> 2020-09-15T10:51:35.5237447Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5237962Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5238886Z  - waiting on <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5239380Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5240401Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5241471Z  - locked <0xc27f5be8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5242180Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5243245Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5244263Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5245128Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5245973Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5247081Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5247816Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5248809Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5249463Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5249827Z 
> 2020-09-15T10:51:35.5250383Z "SourceFetcher" #37 prio=5 os_prio=0 
> tid=0x7f70d0a4b000 nid=0x856 in Object.wait() [0x7f6f80cfa000]
> 2020-09-15T10:51:35.5251124Zjava.lang.Thread.State: WAITING (on object 
> monitor)
> 2020-09-15T10:51:35.5251636Z  at java.lang.Object.wait(Native Method)
> 2020-09-15T10:51:35.5252767Z  - waiting on <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5253336Z  at java.lang.Object.wait(Object.java:502)
> 2020-09-15T10:51:35.5254184Z  at 
> org.apache.flink.connector.base.source.reader.mocks.TestingSplitReader.fetch(TestingSplitReader.java:52)
> 2020-09-15T10:51:35.5255220Z  - locked <0xc298d0b8> (a 
> java.util.ArrayDeque)
> 2020-09-15T10:51:35.5255678Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.FetchTask.run(FetchTask.java:58)
> 2020-09-15T10:51:35.5256235Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.runOnce(SplitFetcher.java:128)
> 2020-09-15T10:51:35.5256803Z  at 
> org.apache.flink.connector.base.source.reader.fetcher.SplitFetcher.run(SplitFetcher.java:95)
> 2020-09-15T10:51:35.5257351Z  at 
> org.apache.flink.util.ThrowableCatchingRunnable.run(ThrowableCatchingRunnable.java:42)
> 2020-09-15T10:51:35.5257838Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> 2020-09-15T10:51:35.5258284Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-09-15T10:51:35.5258856Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 2020-09-15T10:51:35.5259350Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 2020-09-15T10:51:35.5260011Z  at java.lang.Thread.run(Thread.java:748)
> 2020-09-15T10:51:35.5260211Z 
> 

[GitHub] [flink] wangyang0918 commented on pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


wangyang0918 commented on pull request #14006:
URL: https://github.com/apache/flink/pull/14006#issuecomment-724434807


   cc @tillrohrmann Could you please have a look on the documentation?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14007: [FLINK-19945][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


flinkbot commented on pull request #14007:
URL: https://github.com/apache/flink/pull/14007#issuecomment-724434148


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3229475639c0140db85f415a0864b8e5006cad1b (Tue Nov 10 
03:57:41 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-19545) Add e2e test for native Kubernetes HA

2020-11-09 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228907#comment-17228907
 ] 

Yang Wang edited comment on FLINK-19545 at 11/10/20, 3:56 AM:
--

Actually, the above IT cases do not contain an E2E test, which would include 
Flink CLI job submission, killing the JobManager, and checking whether the 
Flink job can recover from the latest checkpoint successfully. It is really a 
basic Kubernetes HA behavior test and would help us ensure it never breaks.

 

For the jepsen tests, I was not aware of this module before and will learn 
more about it. I think it makes sense to let it also work on Kubernetes.


was (Author: fly_in_gis):
Actually, the above IT cases do not contain an E2E test, which would include 
Flink CLI job submission, killing the JobManager, and checking whether the 
Flink job can recover from the latest checkpoint successfully. It is really a 
basic Kubernetes HA behavior test and would help us ensure it never breaks.

 

For the jepsen tests, I was not aware of this project before and will learn 
more about it. I think it makes sense to let it also work on Kubernetes.

> Add e2e test for native Kubernetes HA
> -
>
> Key: FLINK-19545
> URL: https://issues.apache.org/jira/browse/FLINK-19545
> Project: Flink
>  Issue Type: Sub-task
>  Components: Tests
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
> Fix For: 1.12.0
>
>
> We could use minikube for the E2E tests: start a Flink session/application 
> cluster on K8s, kill one TaskManager pod or JobManager pod, and wait for the 
> job to recover from the latest checkpoint successfully.
> {code}
> kubectl exec -it {pod_name} -- /bin/sh -c "kill 1"
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19945) Support sink parallelism configuration to FileSystem connector

2020-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19945:
---
Labels: pull-request-available  (was: )

> Support sink parallelism configuration to FileSystem connector
> --
>
> Key: FLINK-19945
> URL: https://issues.apache.org/jira/browse/FLINK-19945
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: CloseRiver
>Assignee: Lsw_aka_laplace
>Priority: Major
>  Labels: pull-request-available
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] shouweikun opened a new pull request #14007: [FLINK-19945][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


shouweikun opened a new pull request #14007:
URL: https://github.com/apache/flink/pull/14007


   …uration to FileSystem connector
   
   ## What is the purpose of the change
   
   Give users access to configuring the sink parallelism of the FileSystem connector.
   
   
   ## Brief change log
   
   -  add SINK_PARALLELISM into optionalOptions of `FileSystemTableFactory`
   -  configure parallelism of writing file transformation both in streaming 
and batch in `FileSystemTableSink`
   -  update docs, introducing sink.parallelism
   
   ## Verifying this change
   This change added tests and can be verified as follows:
   - `FileSystemTableSinkTest` ensure the configured parallelism is applied
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): ( no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? ( docs)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun commented on pull request #13981: [FLINK-19937][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


shouweikun commented on pull request #13981:
URL: https://github.com/apache/flink/pull/13981#issuecomment-724433200


   A new pr is pushed



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] shouweikun closed pull request #13981: [FLINK-19937][Connectors / FileSystem]Support sink parallelism config…

2020-11-09 Thread GitBox


shouweikun closed pull request #13981:
URL: https://github.com/apache/flink/pull/13981


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


flinkbot commented on pull request #14006:
URL: https://github.com/apache/flink/pull/14006#issuecomment-724432808


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1ee231fa8559cdc63ec4956262da92496698826a (Tue Nov 10 
03:52:18 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-19910) Table API & SQL Data Types Document error

2020-11-09 Thread ZiHaoDeng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228946#comment-17228946
 ] 

ZiHaoDeng edited comment on FLINK-19910 at 11/10/20, 3:51 AM:
--

Now, when I use {{INTERVAL DAY}} with a precision 'p1' greater than 3, an 
exception is thrown, because the precision is only allowed to be <= 3 in the 
source code.

Dear [~twalthr], please check the issue and patch again, and give me your 
suggestions. Thanks.


was (Author: pezynd):
Now, when I use {{INTERVAL DAY}} with a 'p1' parameter value greater than 3, 
an exception is thrown, because it is only allowed to be <= 3 in the source 
code.

Dear [~twalthr], please check the issue and patch again, and give me your 
suggestions. Thanks.

> Table API & SQL Data Types Document error
> -
>
> Key: FLINK-19910
> URL: https://issues.apache.org/jira/browse/FLINK-19910
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.12.0, 1.11.1
>Reporter: ZiHaoDeng
>Priority: Major
> Attachments: image-2020-11-01-16-52-07-420.png, 
> image-2020-11-01-16-54-30-989.png
>
>
> source code
> !image-2020-11-01-16-52-07-420.png!
> but the document is wrong
> !image-2020-11-01-16-54-30-989.png!
> url:[https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/types.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-14482) Bump up rocksdb version

2020-11-09 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-14482:
--
Fix Version/s: (was: 1.12.0)
   1.13.0

> Bump up rocksdb version
> ---
>
> Key: FLINK-14482
> URL: https://issues.apache.org/jira/browse/FLINK-14482
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> The current RocksDB 5.17.2 does not support the write buffer manager well; we 
> need to bump the RocksDB version to support that feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19710) Fix performance regression to rebase FRocksDB with higher version RocksDB

2020-11-09 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-19710:
--
Fix Version/s: (was: 1.12.0)
   1.13.0

> Fix performance regression to rebase FRocksDB with higher version RocksDB
> -
>
> Key: FLINK-19710
> URL: https://issues.apache.org/jira/browse/FLINK-19710
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
> Fix For: 1.13.0
>
>
> We planned to bump the base RocksDB version from 5.17.2 to 6.11.x. However, we 
> observed performance regression compared with 5.17.2 and 5.18.3 via our own 
> flink-benchmarks, and reported to RocksDB community in 
> [rocksdb#5774|https://github.com/facebook/rocksdb/issues/5774]. Since 
> rocksDB-5.18.3 is a bit old for RocksDB community, and rocksDB built-in 
> db_bench tool cannot easily reproduce this regression, we did not get any 
> efficient help from RocksDB community.
> Since the code freeze of Flink release 1.12 is close, we have to figure it 
> out by ourselves. We first tried to use the RocksDB built-in db_bench tool to 
> binary-search the 160 different commits between RocksDB 5.17.2 and 5.18.3. 
> However, the performance regression was not so clear. After using our own 
> flink-benchmarks, we finally detected the commit which introduced the 
> nearly-10% performance regression: [replaced __thread with thread_local 
> keyword 
> |https://github.com/facebook/rocksdb/commit/d6ec288703c8fc53b54be9e3e3f3ffd6a7487c63]
>  .
> From existing knowledge, the performance regression of {{thread-local}} is 
> known from [gcc-4.8 changes|https://gcc.gnu.org/gcc-4.8/changes.html#cxx] and 
> become more serious in [dynamic modules usage 
> |http://david-grs.github.io/tls_performance_overhead_cost_linux/] [[tls 
> benchmark|https://testbit.eu/2015/thread-local-storage-benchmark]]]. That 
> could explain why the RocksDB built-in db_bench tool cannot reproduce this 
> regression, as it is compiled in static mode by recommendation.
>  
> We plan to fix this in our FRocksDB branch first to revert related changes. 
> And from my current local experimental result, that revert proved to be 
> effective to avoid that performance regression.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-19546) Add documentation for native Kubernetes HA

2020-11-09 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19546:
---
Labels: pull-request-available  (was: )

> Add documentation for native Kubernetes HA
> --
>
> Key: FLINK-19546
> URL: https://issues.apache.org/jira/browse/FLINK-19546
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation
>Reporter: Yang Wang
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 opened a new pull request #14006: [FLINK-19546][doc] Add documentation for native Kubernetes HA

2020-11-09 Thread GitBox


wangyang0918 opened a new pull request #14006:
URL: https://github.com/apache/flink/pull/14006


   Add documentation for FLIP-144(native Kubernetes HA).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #13750: [FLINK-19394][docs-zh] Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-11-09 Thread GitBox


flinkbot edited a comment on pull request #13750:
URL: https://github.com/apache/flink/pull/13750#issuecomment-714567487


   
   ## CI report:
   
   * 275378338d8c0968a73489764de84a0cb2096e2b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=8502)
 
   * 930c16e764a8e3c1ce6352ff7389cffd5c88485b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9373)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19937) Support sink parallelism option for all connectors

2020-11-09 Thread Lsw_aka_laplace (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228952#comment-17228952
 ] 

Lsw_aka_laplace commented on FLINK-19937:
-

[~hackergin] 

Hi jinfeng,

I have a similar concern to yours, especially when I applied SINK_PARALLELISM 
to specific connectors. In `PhysicalCommonSink`, for now, we can set the 
parallelism but there is no access to setting the partitioning strategy. 
Sometimes this can be misleading or may cause some unexpected conditions. 

Well, the concern mentioned above seems like another theme. From my 
perspective, there are basically two ways:

     1. Just keep it as is, and let users know the concern if they want to 
configure the sink parallelism.

     2. Add a new interface / a new method in `ParallelismProvider` called 
`partitioningStrategy` to give users access so that they can choose their own 
strategy.

Well, both ways require the user's knowledge of DataStream, which seems to be 
contrary to 'Less Knowledgement'.
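Option 2 above could be sketched roughly as follows. All names here (the provider interfaces, `PartitioningStrategy`, `partitioningStrategy()`) are illustrative assumptions for discussion, not Flink's actual `ParallelismProvider` API; the point is that a default method keeps existing providers source-compatible while letting a sink opt in to its own redistribution strategy.

```java
import java.util.Optional;

// Toy stand-in for a ParallelismProvider-style interface.
interface SketchParallelismProvider {
    Optional<Integer> getParallelism();
}

// Hypothetical strategies a sink could request for the edge into it.
enum PartitioningStrategy { REBALANCE, HASH, FORWARD }

interface PartitioningAwareProvider extends SketchParallelismProvider {
    // Default keeps existing providers compiling unchanged.
    default PartitioningStrategy partitioningStrategy() {
        return PartitioningStrategy.REBALANCE;
    }
}

public class SinkParallelismSketch {
    public static void main(String[] args) {
        // A connector overrides only what it needs; the strategy falls
        // back to the default unless explicitly chosen.
        PartitioningAwareProvider provider = new PartitioningAwareProvider() {
            @Override
            public Optional<Integer> getParallelism() {
                return Optional.of(4);
            }
        };
        System.out.println(provider.getParallelism().get() + " "
                + provider.partitioningStrategy()); // prints 4 REBALANCE
    }
}
```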

> Support sink parallelism option for all connectors
> --
>
> Key: FLINK-19937
> URL: https://issues.apache.org/jira/browse/FLINK-19937
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Reporter: Lsw_aka_laplace
>Priority: Major
>  Labels: pull-request-available
>
> Since https://issues.apache.org/jira/browse/FLINK-19727 has been done, the
> SINK_PARALLELISM option and `ParallelismProvider` should be applied to all 
> existing `DynamicTableSink` connectors in order to give users access to 
> setting their own sink parallelism.
>  
>  
> Update:
> Anybody who works on this issue should refer to FLINK-19727~
> `ParallelismProvider` should work with `SinkRuntimeProvider`; actually 
> `SinkFunctionProvider` and `OutputFormatProvider` have implemented 
> `ParallelismProvider`. And `SINK_PARALLELISM` has already been defined in 
> `FactoryUtil`; please reuse it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   >