[jira] [Updated] (FLINK-17307) Add collector to DeserializationSchema

2020-11-08 Thread Dawid Wysakowicz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Wysakowicz updated FLINK-17307:
-
Description: 
Additionally add support in connectors:
* Kafka
-* Kinesis-
* PubSub
* RabbitMq

  was:
Additionally add support in connectors:
* Kafka
* Kinesis
* PubSub
* RabbitMq


> Add collector to DeserializationSchema
> --
>
> Key: FLINK-17307
> URL: https://issues.apache.org/jira/browse/FLINK-17307
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Additionally add support in connectors:
> * Kafka
> -* Kinesis-
> * PubSub
> * RabbitMq



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-20055) Datadog API Key exposed in Flink JobManager logs

2020-11-08 Thread Florian Szabo (Jira)
Florian Szabo created FLINK-20055:
-

 Summary: Datadog API Key exposed in Flink JobManager logs
 Key: FLINK-20055
 URL: https://issues.apache.org/jira/browse/FLINK-20055
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / Configuration
Affects Versions: 1.11.2, 1.9.1
Reporter: Florian Szabo


When Flink is set up to report metrics to Datadog, the JobManager log contains 
the Datadog API key in plain text. In fact it shows up in two different 
places:
{code:java}
2020-08-03 09:03:19,400 INFO  
org.apache.flink.configuration.GlobalConfiguration- Loading 
configuration property: metrics.reporter.dghttp.apikey, 
...
2020-08-03 09:03:20,437 INFO  org.apache.flink.runtime.metrics.ReporterSetup
- Configuring dghttp with {apikey=, 
tags=<...>,profile:<...>,region:<...>,env:<...>, 
class=org.apache.flink.metrics.datadog.DatadogHttpReporter}.
{code}
The expected behavior here is that the API key is masked in both places so 
that it does not end up in logs where it should not be.
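As an illustration only (this is a hedged sketch, not Flink's actual implementation; the class name and pattern list below are hypothetical), a generic helper that masks values of sensitive-looking configuration keys before logging might look like this:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical helper: values of keys whose names contain a sensitive
// fragment are replaced before they reach the log.
public class SensitiveConfigMasker {

    // Assumed list of sensitive key fragments; a real reporter's list may differ.
    private static final List<String> SENSITIVE_PATTERNS =
            Arrays.asList("password", "secret", "apikey", "auth-token");

    public static boolean isSensitive(String key) {
        String lower = key.toLowerCase();
        return SENSITIVE_PATTERNS.stream().anyMatch(lower::contains);
    }

    public static String maskIfSensitive(String key, String value) {
        return isSensitive(key) ? "******" : value;
    }

    public static void main(String[] args) {
        // The apikey value is hidden; ordinary values pass through unchanged.
        System.out.println("metrics.reporter.dghttp.apikey = "
                + maskIfSensitive("metrics.reporter.dghttp.apikey", "abc123"));
        System.out.println("parallelism.default = "
                + maskIfSensitive("parallelism.default", "4"));
    }
}
```

The key design point is matching on key *names* rather than values, so masking works no matter what the secret looks like.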





[jira] [Assigned] (FLINK-20054) 3 Level List is not supported in ParquetInputFormat

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20054:
--

Assignee: Zhenqiu Huang

> 3 Level List is not supported in ParquetInputFormat
> ---
>
> Key: FLINK-20054
> URL: https://issues.apache.org/jira/browse/FLINK-20054
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.2
>Reporter: Zhenqiu Huang
>Assignee: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
>
> The flink-parquet module doesn't support reading a 3-level list type in
> Parquet, though it is able to process a 2-level list type.
> 3-level
> optional group my_list (LIST) {
>   repeated group element {
> required binary str (UTF8);
>   };
> }
>  2-level
> optional group my_list (LIST) {
>   repeated int32 element;
> }
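The structural difference between the two encodings quoted above can be checked mechanically. The sketch below uses a tiny hypothetical schema model (not the real Parquet or Flink API) to distinguish them: a 3-level LIST wraps the element in a repeated group with exactly one field, while the legacy 2-level form repeats the element type directly.

```java
import java.util.Collections;
import java.util.List;

// Minimal hypothetical stand-in for Parquet schema nodes, for illustration only.
public class ParquetListShape {

    static class Node {
        final boolean repeated;
        final boolean isGroup;
        final List<Node> children;

        Node(boolean repeated, boolean isGroup, List<Node> children) {
            this.repeated = repeated;
            this.isGroup = isGroup;
            this.children = children;
        }
    }

    // 3-level: the single child of the LIST group is a repeated group
    // holding exactly one element field.
    static boolean isThreeLevelList(Node listGroup) {
        if (listGroup.children.size() != 1) {
            return false;
        }
        Node repeatedField = listGroup.children.get(0);
        return repeatedField.repeated
                && repeatedField.isGroup
                && repeatedField.children.size() == 1;
    }

    // Models: optional group my_list (LIST) { repeated group element { required binary str; } }
    static Node threeLevelExample() {
        Node str = new Node(false, false, Collections.emptyList());
        Node element = new Node(true, true, Collections.singletonList(str));
        return new Node(false, true, Collections.singletonList(element));
    }

    // Models: optional group my_list (LIST) { repeated int32 element; }
    static Node twoLevelExample() {
        Node element = new Node(true, false, Collections.emptyList());
        return new Node(false, true, Collections.singletonList(element));
    }
}
```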





[jira] [Commented] (FLINK-20013) BoundedBlockingSubpartition may leak network buffer if task is failed or canceled

2020-11-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20013?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228387#comment-17228387
 ] 

Zhu Zhu commented on FLINK-20013:
-

Hi [~nicholasjiang], I can see that [~pnowojski] has assigned this ticket to 
you.
Feel free to open a fix for it.

> BoundedBlockingSubpartition may leak network buffer if task is failed or 
> canceled
> -
>
> Key: FLINK-20013
> URL: https://issues.apache.org/jira/browse/FLINK-20013
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Reporter: Yingjie Cao
>Assignee: Nicholas Jiang
>Priority: Major
> Fix For: 1.12.0
>
>
> BoundedBlockingSubpartition may leak network buffer if task is failed or 
> canceled. We need to recycle the current BufferConsumer when task is failed 
> or canceled.
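The idea of recycling the current buffer on failure or cancellation can be sketched abstractly. The classes below are hypothetical stand-ins (not Flink's actual BoundedBlockingSubpartition or BufferConsumer), showing a release path that always recycles the in-flight buffer so the backing network memory is returned to the pool:

```java
// Hypothetical sketch of the fix idea: on release (normal finish, failure,
// or cancellation) the current in-flight buffer is always recycled.
public class SubpartitionSketch {

    static class BufferConsumer {
        private boolean recycled = false;

        void recycle() {
            recycled = true;
        }

        boolean isRecycled() {
            return recycled;
        }
    }

    private BufferConsumer current = new BufferConsumer();

    BufferConsumer currentBuffer() {
        return current;
    }

    // Called from both the normal finish path and the failure/cancel path,
    // so the network buffer cannot leak on either.
    void release() {
        if (current != null && !current.isRecycled()) {
            current.recycle();
        }
        current = null;
    }
}
```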





[GitHub] [flink] HuangZhenQiu opened a new pull request #13994: [FLINK-20054][formats] Fix ParquetInputFormat 3 level List handling

2020-11-08 Thread GitBox


HuangZhenQiu opened a new pull request #13994:
URL: https://github.com/apache/flink/pull/13994


   # What is the purpose of the change
   Fix the 3 level List handling in ParquetInputFormat
   
   ## Brief change log
   1) Add the 3 level List schema conversion logic in ParquetSchemaConverter.
   2) Update the ParquetRowInputFormatTest to generate 3 level List for end to 
end test.
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is already covered by existing tests, such as 
ParquetRowInputFormatTest.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-20054) 3 Level List is not supported in ParquetInputFormat

2020-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20054:
---
Labels: pull-request-available  (was: )

> 3 Level List is not supported in ParquetInputFormat
> ---
>
> Key: FLINK-20054
> URL: https://issues.apache.org/jira/browse/FLINK-20054
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.2
>Reporter: Zhenqiu Huang
>Priority: Minor
>  Labels: pull-request-available
>
> The flink-parquet module doesn't support reading a 3-level list type in
> Parquet, though it is able to process a 2-level list type.
> 3-level
> optional group my_list (LIST) {
>   repeated group element {
> required binary str (UTF8);
>   };
> }
>  2-level
> optional group my_list (LIST) {
>   repeated int32 element;
> }





[GitHub] [flink] flinkbot edited a comment on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 7769826c671c1c7e081128ab1a5cf74819808456 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9330)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13972:
URL: https://github.com/apache/flink/pull/13972#issuecomment-723244452


   
   ## CI report:
   
   * 18bd1620f55d1363b4d3173dc2e9c14e83ba859b UNKNOWN
   * 951191135eee7d3f7fddea2a5e296dbd36716dd3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9312)
 
   * f7b60d2413a5a86136b1d00d319dca668787559d UNKNOWN
   * b2dcd73cb70c5a3a1628aeff8dc1fb8e05d6fd53 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9335)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13993: [hotfix][hbase] Add missed ITCase in hbase2 connector

2020-11-08 Thread GitBox


flinkbot commented on pull request #13993:
URL: https://github.com/apache/flink/pull/13993#issuecomment-723825860


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit ec0ef0033a2910e7f0b8c7560740627fc28c7586 (Mon Nov 09 
07:37:16 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Comment Edited] (FLINK-19964) Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph

2020-11-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228381#comment-17228381
 ] 

Zhu Zhu edited comment on FLINK-19964 at 11/9/20, 7:37 AM:
---

The change FLINK-19189 to "enable pipelined region scheduling by default" was 
merged on 09/24, which is even older.
So possibly there are also other causes, which together result in this problem.


was (Author: zhuzh):
The change FLINK-19189 to "enable pipelined region scheduling by default" was 
merged on 09/24, which is even older.
So there may be another cause, or possibly there are multiple causes.

> Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph
> 
>
> Key: FLINK-19964
> URL: https://issues.apache.org/jira/browse/FLINK-19964
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / Network, 
> Tests
>Affects Versions: 1.12.0
>Reporter: Chesnay Schepler
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The HITSITCase has gotten stuck on Azure. Chances are that something in the 
> scheduling or network has broken it.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8919=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5





[GitHub] [flink] leonardBang opened a new pull request #13993: [hotfix][hbase] Add missed ITCase in hbase2 connector

2020-11-08 Thread GitBox


leonardBang opened a new pull request #13993:
URL: https://github.com/apache/flink/pull/13993


   ## What is the purpose of the change
   
   * This pull request adds the missing `@Test` annotation in the hbase2 connector; I think 
this test was missed in FLINK-18795
   
   
   ## Brief change log
   
 - add the missing `@Test` annotation
 - fix a typo
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   







[jira] [Commented] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228382#comment-17228382
 ] 

Robert Metzger commented on FLINK-19882:


Okay, the problem seems to be the following:

1. the test process gets killed by the timeout watchdog (it's just a 
coincidence of the OS PID allocation)
2. the timeout watchdog is still running because the "Local recovery and sticky 
scheduling end-to-end test" is not properly exiting
3. It is unclear why the "Local recovery and sticky scheduling end-to-end test" 
is not properly exiting. This is its output:
{code}
2020-11-09T00:48:38.6400606Z Nov 09 00:48:38 Starting zookeeper daemon on host 
fv-az668-576.
2020-11-09T00:48:38.7987804Z Nov 09 00:48:38 Starting HA cluster with 1 masters.
2020-11-09T00:48:39.9627643Z Nov 09 00:48:39 Starting standalonesession daemon 
on host fv-az668-576.
2020-11-09T00:48:41.3899696Z Nov 09 00:48:41 Starting taskexecutor daemon on 
host fv-az668-576.
2020-11-09T00:48:41.4279555Z Nov 09 00:48:41 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:42.4789201Z Nov 09 00:48:42 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:43.6305190Z Nov 09 00:48:43 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:44.6585379Z Nov 09 00:48:44 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:45.6980086Z Nov 09 00:48:45 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:46.7181937Z Nov 09 00:48:46 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:47.7353049Z Nov 09 00:48:47 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:48.7532273Z Nov 09 00:48:48 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:49.7689345Z Nov 09 00:48:49 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:50.7851585Z Nov 09 00:48:50 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:51.8031040Z Nov 09 00:48:51 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:52.8264372Z Nov 09 00:48:52 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:53.8605046Z Nov 09 00:48:53 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:54.8848538Z Nov 09 00:48:54 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:55.9036072Z Nov 09 00:48:55 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:56.9203992Z Nov 09 00:48:56 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:57.9366759Z Nov 09 00:48:57 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:58.9564157Z Nov 09 00:48:58 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:48:59.9745170Z Nov 09 00:48:59 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:00.9914037Z Nov 09 00:49:00 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:02.0076906Z Nov 09 00:49:02 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:03.0240580Z Nov 09 00:49:03 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:04.0426484Z Nov 09 00:49:04 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:05.0598331Z Nov 09 00:49:05 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:06.0774841Z Nov 09 00:49:06 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:07.0934791Z Nov 09 00:49:07 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:08.1095532Z Nov 09 00:49:08 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:09.1217046Z Nov 09 00:49:09 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:10.1384550Z Nov 09 00:49:10 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:11.160Z Nov 09 00:49:11 Waiting for Dispatcher REST 
endpoint to come up...
2020-11-09T00:49:12.1573132Z Nov 09 00:49:12 Dispatcher REST endpoint has not 
started within a timeout of 30 sec
2020-11-09T00:49:12.1596545Z Nov 09 00:49:12 Checking of logs skipped.
2020-11-09T00:49:12.1597438Z Nov 09 00:49:12 
2020-11-09T00:49:12.1601475Z Nov 09 00:49:12 [PASS] 'Local recovery and sticky 
scheduling end-to-end test' passed after 0 minutes and 34 seconds! Test exited 
with exit code 0.
{code}

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] 

[jira] [Updated] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-19882:
---
Affects Version/s: 1.12.0

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Jingsong Lee
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-10-29T09:43:24.0090914Z [ERROR] The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0093105Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0094488Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0094797Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0095033Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0095321Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0095828Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0097838Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0098966Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0099266Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0099502Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0099789Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0100331Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
> 2020-10-29T09:43:24.0100883Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
> 2020-10-29T09:43:24.0101774Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
> 2020-10-29T09:43:24.0102360Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-10-29T09:43:24.0103004Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-10-29T09:43:24.0103737Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-10-29T09:43:24.0104301Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-10-29T09:43:24.0104828Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-10-29T09:43:24.0105334Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-10-29T09:43:24.0105826Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
> 2020-10-29T09:43:24.0106384Z [ERROR] at 
> 

[jira] [Commented] (FLINK-19964) Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph

2020-11-08 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228381#comment-17228381
 ] 

Zhu Zhu commented on FLINK-19964:
-

The change FLINK-19189 to "enable pipelined region scheduling by default" was 
merged on 09/24, which is even older.
So there may be another cause, or possibly there are multiple causes.

> Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph
> 
>
> Key: FLINK-19964
> URL: https://issues.apache.org/jira/browse/FLINK-19964
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / Network, 
> Tests
>Affects Versions: 1.12.0
>Reporter: Chesnay Schepler
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The HITSITCase has gotten stuck on Azure. Chances are that something in the 
> scheduling or network has broken it.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8919=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5





[jira] [Created] (FLINK-20054) 3 Level List is not supported in ParquetInputFormat

2020-11-08 Thread Zhenqiu Huang (Jira)
Zhenqiu Huang created FLINK-20054:
-

 Summary: 3 Level List is not supported in ParquetInputFormat
 Key: FLINK-20054
 URL: https://issues.apache.org/jira/browse/FLINK-20054
 Project: Flink
  Issue Type: Bug
  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
Affects Versions: 1.11.2
Reporter: Zhenqiu Huang


The flink-parquet module doesn't support reading a 3-level list type in
Parquet, though it is able to process a 2-level list type.

3-level

optional group my_list (LIST) {
  repeated group element {
required binary str (UTF8);
  };
}


 2-level

optional group my_list (LIST) {
  repeated int32 element;
}






[GitHub] [flink] flinkbot edited a comment on pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13972:
URL: https://github.com/apache/flink/pull/13972#issuecomment-723244452


   
   ## CI report:
   
   * 18bd1620f55d1363b4d3173dc2e9c14e83ba859b UNKNOWN
   * 951191135eee7d3f7fddea2a5e296dbd36716dd3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9312)
 
   * f7b60d2413a5a86136b1d00d319dca668787559d UNKNOWN
   * b2dcd73cb70c5a3a1628aeff8dc1fb8e05d6fd53 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-19635) HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result mismatch

2020-11-08 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228379#comment-17228379
 ] 

Leonard Xu commented on FLINK-19635:


I checked the failed tests in this ticket and in 
https://issues.apache.org/jira/browse/FLINK-19615;
both of them happened in the *hbase2* connector, thus I suspect the unstable tests may 
relate to the hbase2 testing cluster (HBaseTestingClusterAutoStarter).

[~mgergely] Do you have any insight?  

> HBaseConnectorITCase.testTableSourceSinkWithDDL is unstable with a result 
> mismatch
> --
>
> Key: FLINK-19635
> URL: https://issues.apache.org/jira/browse/FLINK-19635
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Leonard Xu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=7562=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {code}
> 2020-10-14T04:35:36.9268975Z testTableSourceSinkWithDDL[planner = 
> BLINK_PLANNER, legacy = 
> false](org.apache.flink.connector.hbase2.HBaseConnectorITCase)  Time elapsed: 
> 3.131 sec  <<< FAILURE!
> 2020-10-14T04:35:36.9276246Z java.lang.AssertionError: 
> expected:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003,
>  
> 4,40,null,400,4.04,true,Welt-4,2019-08-18T19:03,2019-08-18,19:03,12345678.0004,
>  
> 5,50,Hello-5,500,5.05,false,Welt-5,2019-08-19T19:10,2019-08-19,19:10,12345678.0005,
>  
> 6,60,Hello-6,600,6.06,true,Welt-6,2019-08-19T19:20,2019-08-19,19:20,12345678.0006,
>  
> 7,70,Hello-7,700,7.07,false,Welt-7,2019-08-19T19:30,2019-08-19,19:30,12345678.0007,
>  
> 8,80,null,800,8.08,true,Welt-8,2019-08-19T19:40,2019-08-19,19:40,12345678.0008]>
>  but 
> was:<[1,10,Hello-1,100,1.01,false,Welt-1,2019-08-18T19:00,2019-08-18,19:00,12345678.0001,
>  
> 2,20,Hello-2,200,2.02,true,Welt-2,2019-08-18T19:01,2019-08-18,19:01,12345678.0002,
>  
> 3,30,Hello-3,300,3.03,false,Welt-3,2019-08-18T19:02,2019-08-18,19:02,12345678.0003]>
> 2020-10-14T04:35:36.9281340Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-10-14T04:35:36.9282023Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-10-14T04:35:36.9328385Z  at 
> org.junit.Assert.assertEquals(Assert.java:118)
> 2020-10-14T04:35:36.9338939Z  at 
> org.junit.Assert.assertEquals(Assert.java:144)
> 2020-10-14T04:35:36.9339880Z  at 
> org.apache.flink.connector.hbase2.HBaseConnectorITCase.testTableSourceSinkWithDDL(HBaseConnectorITCase.java:449)
> 2020-10-14T04:35:36.9341003Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[jira] [Commented] (FLINK-20011) PageRankITCase.testPrintWithRMatGraph hangs

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228375#comment-17228375
 ] 

Robert Metzger commented on FLINK-20011:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9295=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5=12601

> PageRankITCase.testPrintWithRMatGraph hangs
> ---
>
> Key: FLINK-20011
> URL: https://issues.apache.org/jira/browse/FLINK-20011
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / 
> Coordination, Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9121=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19]
> {code}
> 2020-11-05T22:42:34.4186647Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa98c00b800 nid=0x32f8 waiting on condition [0x7fa995c12000] 
> 2020-11-05T22:42:34.4187168Z java.lang.Thread.State: WAITING (parking) 
> 2020-11-05T22:42:34.4187563Z at sun.misc.Unsafe.park(Native Method) 
> 2020-11-05T22:42:34.4188246Z - parking to wait for <0x8736d120> (a 
> java.util.concurrent.CompletableFuture$Signaller) 
> 2020-11-05T22:42:34.411Z at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> 2020-11-05T22:42:34.4189351Z at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>  2020-11-05T22:42:34.4189930Z at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) 
> 2020-11-05T22:42:34.4190509Z at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>  2020-11-05T22:42:34.4191059Z at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
> 2020-11-05T22:42:34.4191591Z at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:893)
>  2020-11-05T22:42:34.4192208Z at 
> org.apache.flink.graph.asm.dataset.DataSetAnalyticBase.execute(DataSetAnalyticBase.java:55)
>  2020-11-05T22:42:34.4192787Z at 
> org.apache.flink.graph.drivers.output.Print.write(Print.java:48) 
> 2020-11-05T22:42:34.4193373Z at 
> org.apache.flink.graph.Runner.execute(Runner.java:454) 
> 2020-11-05T22:42:34.4194156Z at 
> org.apache.flink.graph.Runner.main(Runner.java:507) 
> 2020-11-05T22:42:34.4194618Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.getSystemOutput(DriverBaseITCase.java:208)
>  2020-11-05T22:42:34.4195192Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.expectedCount(DriverBaseITCase.java:100)
>  2020-11-05T22:42:34.4195914Z at 
> org.apache.flink.graph.drivers.PageRankITCase.testPrintWithRMatGraph(PageRankITCase.java:60)
> {code}





[jira] [Commented] (FLINK-20052) kafka/gelly pre-commit test failed due to process hang

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228374#comment-17228374
 ] 

Robert Metzger commented on FLINK-20052:


From the stacktraces, it looks like {{PageRankITCase.testPrintWithRMatGraph}} 
hangs in this case. This ticket is then a duplicate of FLINK-20011. I'll 
close this one.

> kafka/gelly pre-commit test failed due to process hang
> --
>
> Key: FLINK-20052
> URL: https://issues.apache.org/jira/browse/FLINK-20052
> Project: Flink
>  Issue Type: Test
>  Components: Library / Graph Processing (Gelly), Tests
>Affects Versions: 1.12.0
>Reporter: Yu Li
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The kafka/gelly test failed in recent pre-commit [Azure 
> build|https://s.apache.org/flink-kafka-gelly-failure]:
> {noformat}
> Process produced no output for 900 seconds.
> ...
> ##[error]Bash exited with code '143'.
> {noformat}





[jira] [Closed] (FLINK-20052) kafka/gelly pre-commit test failed due to process hang

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-20052.
--
Fix Version/s: (was: 1.12.0)
   Resolution: Duplicate

> kafka/gelly pre-commit test failed due to process hang
> --
>
> Key: FLINK-20052
> URL: https://issues.apache.org/jira/browse/FLINK-20052
> Project: Flink
>  Issue Type: Test
>  Components: Library / Graph Processing (Gelly), Tests
>Affects Versions: 1.12.0
>Reporter: Yu Li
>Priority: Major
>  Labels: test-stability
>
> The kafka/gelly test failed in recent pre-commit [Azure 
> build|https://s.apache.org/flink-kafka-gelly-failure]:
> {noformat}
> Process produced no output for 900 seconds.
> ...
> ##[error]Bash exited with code '143'.
> {noformat}





[GitHub] [flink] flinkbot edited a comment on pull request #13972: [FLINK-19912][json] Fix JSON format fails to serialize map value with…

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13972:
URL: https://github.com/apache/flink/pull/13972#issuecomment-723244452


   
   ## CI report:
   
   * 18bd1620f55d1363b4d3173dc2e9c14e83ba859b UNKNOWN
   * 951191135eee7d3f7fddea2a5e296dbd36716dd3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9312)
 
   * f7b60d2413a5a86136b1d00d319dca668787559d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-19973) [Flink-Deployment] YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-08 Thread Kostas Kloudas (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228363#comment-17228363
 ] 

Kostas Kloudas commented on FLINK-19973:


Thanks a lot [~zhisheng]!

> [Flink-Deployment] YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Assignee: Kostas Kloudas
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use flink-sql-client to deploy a job to YARN (per-job mode), I set 
> `execution.target: yarn-per-job` in flink-conf.yaml and the job is deployed to YARN.
>  
> When I deploy a jar job to YARN with the command `./bin/flink run -m 
> yarn-cluster -ynm flink-1.12-test  -ytm 3g -yjm 3g 
> examples/streaming/StateMachineExample.jar`, the deployment succeeds, but the 
> `-ynm`, `-ytm 3g` and `-yjm 3g` options don't take effect. 
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the config `execution.target: yarn-per-job`, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!
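As a workaround sketch while `execution.target: yarn-per-job` is set, the same settings can be passed as generic `-D` dynamic properties instead of the legacy `-y*` shortcuts. The option keys below are assumptions based on Flink's standard YARN and memory configuration, not taken from this ticket:

```shell
# Sketch: -ynm / -yjm / -ytm expressed as generic -D configuration options,
# which the yarn-per-job executor reads directly (assumed keys).
./bin/flink run \
  -Dyarn.application.name=flink-1.12-test \
  -Djobmanager.memory.process.size=3g \
  -Dtaskmanager.memory.process.size=3g \
  examples/streaming/StateMachineExample.jar
```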





[jira] [Commented] (FLINK-15959) Add min number of slots configuration to limit total number of slots

2020-11-08 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228359#comment-17228359
 ] 

Yangze Guo commented on FLINK-15959:


[~trohrmann] Sorry for the belated reply. No, I'll change the fix version to 
1.13.

> Add min number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Assignee: Yangze Guo
>Priority: Major
> Fix For: 1.12.0
>
>
> Flink removed the `-n` option after FLIP-6, changing to having the 
> ResourceManager start a new worker when required. But I think maintaining a 
> certain amount of slots is necessary. These workers would start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single job; 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start in 
> several batches in some cases, which might slow down the startup speed.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only takes 
> effect if all TMs are registered. Starting all TMs at the beginning can solve 
> this problem.
> *Suggestion:*
> * Add the configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same 
> as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots are 
> registered, and throw an exception when the maximum is exceeded.
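As a configuration sketch of this proposal, the two `numberOfTotalSlots` keys below are only suggested in this ticket and are assumptions, not shipped Flink options:

```yaml
# flink-conf.yaml sketch of the proposed bounds on total cluster slots.
taskmanager.minimum.numberOfTotalSlots: 8    # proposed: pre-start workers until 8 slots exist
taskmanager.maximum.numberOfTotalSlots: 32   # proposed: reject slot requests beyond 32 slots
taskmanager.numberOfTaskSlots: 4             # existing option: slots per TaskManager
```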





[jira] [Updated] (FLINK-15959) Add min number of slots configuration to limit total number of slots

2020-11-08 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-15959:
---
Fix Version/s: (was: 1.12.0)
   1.13.0

> Add min number of slots configuration to limit total number of slots
> 
>
> Key: FLINK-15959
> URL: https://issues.apache.org/jira/browse/FLINK-15959
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.11.0
>Reporter: YufeiLiu
>Assignee: Yangze Guo
>Priority: Major
> Fix For: 1.13.0
>
>
> Flink removed the `-n` option after FLIP-6, changing to having the 
> ResourceManager start a new worker when required. But I think maintaining a 
> certain amount of slots is necessary. These workers would start immediately 
> when the ResourceManager starts and would not be released even if all slots 
> are free.
> Here are some reasons:
> # Users actually know how many resources are needed when running a single job; 
> initializing all workers when the cluster starts can speed up the startup 
> process.
> # Jobs are scheduled in topology order; the next operator won't be scheduled 
> until the prior execution's slot is allocated. The TaskExecutors will start in 
> several batches in some cases, which might slow down the startup speed.
> # Flink supports 
> [FLINK-12122|https://issues.apache.org/jira/browse/FLINK-12122] [Spread out 
> tasks evenly across all available registered TaskManagers], but it only takes 
> effect if all TMs are registered. Starting all TMs at the beginning can solve 
> this problem.
> *Suggestion:*
> * Add the configs "taskmanager.minimum.numberOfTotalSlots" and 
> "taskmanager.maximum.numberOfTotalSlots"; the default behavior stays the same 
> as before.
> * Start enough workers to satisfy the minimum number of slots when the 
> ResourceManager accepts leadership (subtracting recovered workers).
> * Don't complete slot requests until the minimum number of slots are 
> registered, and throw an exception when the maximum is exceeded.





[jira] [Updated] (FLINK-20050) SourceCoordinatorProviderTest.testCheckpointAndReset failed with NullPointerException

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-20050:
---
Priority: Critical  (was: Major)

> SourceCoordinatorProviderTest.testCheckpointAndReset failed with 
> NullPointerException
> -
>
> Key: FLINK-20050
> URL: https://issues.apache.org/jira/browse/FLINK-20050
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-11-08T22:24:39.5642544Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 0.954 s  <<< ERROR!
> 2020-11-08T22:24:39.5643055Z java.lang.NullPointerException
> 2020-11-08T22:24:39.5643578Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> {code}





[GitHub] [flink] rmetzger commented on a change in pull request #13978: [FLINK-20033] Ensure that stopping a JobMaster will suspend the running job

2020-11-08 Thread GitBox


rmetzger commented on a change in pull request #13978:
URL: https://github.com/apache/flink/pull/13978#discussion_r519570415



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/JobMaster.java
##
@@ -1212,23 +1208,13 @@ protected void onRegistrationFailure(final Throwable 
failure) {
 
private class JobManagerJobStatusListener implements JobStatusListener {
 
-   private volatile boolean running = true;
-
@Override
public void jobStatusChanges(
final JobID jobId,
final JobStatus newJobStatus,
final long timestamp,
final Throwable error) {
-
-   if (running) {
-   // run in rpc thread to avoid concurrency
-   runAsync(() -> jobStatusChanged(newJobStatus, 
timestamp, error));
-   }
-   }
-
-   private void stop() {
-   running = false;
+   jobStatusChanged(newJobStatus, timestamp, error);

Review comment:
   It's not really related to this change, but since you are touching the 
code: The `timestamp` and `error` parameters are not used in `jobStatusChanged`.
   At least in the PR for master, we could address that?









[jira] [Commented] (FLINK-20051) SourceReaderTestBase.testAddSplitToExistingFetcher failed with NullPointerException

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228355#comment-17228355
 ] 

Robert Metzger commented on FLINK-20051:


[~becket_qin] can you take a look at this failure?

> SourceReaderTestBase.testAddSplitToExistingFetcher failed with 
> NullPointerException
> ---
>
> Key: FLINK-20051
> URL: https://issues.apache.org/jira/browse/FLINK-20051
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-11-08T21:49:29.6792941Z [ERROR] 
> testAddSplitToExistingFetcher(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 0.632 s  <<< ERROR!
> 2020-11-08T21:49:29.6793408Z java.lang.NullPointerException
> 2020-11-08T21:49:29.6793998Z  at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader$KafkaPartitionSplitRecords.nextSplit(KafkaPartitionSplitReader.java:363)
> 2020-11-08T21:49:29.6795970Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.moveToNextSplit(SourceReaderBase.java:187)
> 2020-11-08T21:49:29.6796596Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:159)
> 2020-11-08T21:49:29.6797317Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> 2020-11-08T21:49:29.6797942Z  at 
> org.apache.flink.connector.testutils.source.reader.SourceReaderTestBase.testAddSplitToExistingFetcher(SourceReaderTestBase.java:98)
> {code}





[jira] [Updated] (FLINK-20051) SourceReaderTestBase.testAddSplitToExistingFetcher failed with NullPointerException

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-20051:
---
Priority: Critical  (was: Major)

> SourceReaderTestBase.testAddSplitToExistingFetcher failed with 
> NullPointerException
> ---
>
> Key: FLINK-20051
> URL: https://issues.apache.org/jira/browse/FLINK-20051
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-11-08T21:49:29.6792941Z [ERROR] 
> testAddSplitToExistingFetcher(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 0.632 s  <<< ERROR!
> 2020-11-08T21:49:29.6793408Z java.lang.NullPointerException
> 2020-11-08T21:49:29.6793998Z  at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader$KafkaPartitionSplitRecords.nextSplit(KafkaPartitionSplitReader.java:363)
> 2020-11-08T21:49:29.6795970Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.moveToNextSplit(SourceReaderBase.java:187)
> 2020-11-08T21:49:29.6796596Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:159)
> 2020-11-08T21:49:29.6797317Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> 2020-11-08T21:49:29.6797942Z  at 
> org.apache.flink.connector.testutils.source.reader.SourceReaderTestBase.testAddSplitToExistingFetcher(SourceReaderTestBase.java:98)
> {code}





[GitHub] [flink] flinkbot edited a comment on pull request #13992: [FLINK-20035][tests] Use random port for rest endpoint in BlockingShuffleITCase and ShuffleCompressionITCase

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13992:
URL: https://github.com/apache/flink/pull/13992#issuecomment-723768653


   
   ## CI report:
   
   * 7bde630b8b67c9c31c49ec590ce58c09741f5a3c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9334)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-19964) Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228354#comment-17228354
 ] 

Robert Metzger commented on FLINK-19964:


The iteration deadlocks are happening quite frequently now. The commit Roman 
found while bisecting is quite old. I believe a more recent change to the 
pipelined region scheduling is more likely to be causing this instability.

> Gelly ITCase stuck on Azure in HITSITCase.testPrintWithRMatGraph
> 
>
> Key: FLINK-19964
> URL: https://issues.apache.org/jira/browse/FLINK-19964
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / Network, 
> Tests
>Affects Versions: 1.12.0
>Reporter: Chesnay Schepler
>Assignee: Roman Khachatryan
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The HITSITCase has gotten stuck on Azure. Chances are that something in the 
> scheduling or network has broken it.
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8919=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5





[jira] [Commented] (FLINK-20035) BlockingShuffleITCase unstable with "Could not start rest endpoint on any port in port range 8081"

2020-11-08 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228353#comment-17228353
 ] 

Robert Metzger commented on FLINK-20035:


Thanks a lot for looking into it. I assigned you to the ticket.
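The approach named in the linked PR, using a random port for the rest endpoint instead of the default 8081, boils down to a one-line configuration change. A minimal sketch using Flink's standard `rest.bind-port` option (the actual wiring in the PR may differ):

```yaml
# flink-conf.yaml / MiniCluster configuration sketch:
# bind the REST endpoint to an OS-assigned free port so that
# concurrently running test clusters don't collide on port 8081.
rest.bind-port: 0
```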

> BlockingShuffleITCase unstable with "Could not start rest endpoint on any 
> port in port range 8081"
> --
>
> Key: FLINK-20035
> URL: https://issues.apache.org/jira/browse/FLINK-20035
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9178=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-06T13:52:56.6369221Z [ERROR] 
> testBoundedBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.522 s  <<< ERROR!
> 2020-11-06T13:52:56.6370005Z org.apache.flink.util.FlinkException: Could not 
> create the DispatcherResourceManagerComponent.
> 2020-11-06T13:52:56.6370649Z  at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:257)
> 2020-11-06T13:52:56.6371371Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:412)
> 2020-11-06T13:52:56.6372258Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:378)
> 2020-11-06T13:52:56.6373276Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:334)
> 2020-11-06T13:52:56.6374182Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:50)
> 2020-11-06T13:52:56.6375055Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testBoundedBlockingShuffle(BlockingShuffleITCase.java:53)
> 2020-11-06T13:52:56.6375787Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-06T13:52:56.6376546Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-06T13:52:56.6377514Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-06T13:52:56.6378008Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-06T13:52:56.6378774Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-06T13:52:56.6379350Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-06T13:52:56.6458094Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-06T13:52:56.6459047Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-06T13:52:56.6459678Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-06T13:52:56.6460182Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-06T13:52:56.6460770Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-06T13:52:56.6461210Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-06T13:52:56.6461649Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-06T13:52:56.6462089Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-06T13:52:56.6462736Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-06T13:52:56.6463286Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-06T13:52:56.6463728Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-06T13:52:56.6464344Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-06T13:52:56.6464918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-06T13:52:56.6465428Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-06T13:52:56.6465915Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-06T13:52:56.6466405Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-06T13:52:56.6467050Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-06T13:52:56.6468341Z  at 
> 

[jira] [Assigned] (FLINK-20035) BlockingShuffleITCase unstable with "Could not start rest endpoint on any port in port range 8081"

2020-11-08 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-20035:
--

Assignee: Yingjie Cao

> BlockingShuffleITCase unstable with "Could not start rest endpoint on any 
> port in port range 8081"
> --
>
> Key: FLINK-20035
> URL: https://issues.apache.org/jira/browse/FLINK-20035
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yingjie Cao
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9178=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-06T13:52:56.6369221Z [ERROR] 
> testBoundedBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.522 s  <<< ERROR!
> 2020-11-06T13:52:56.6370005Z org.apache.flink.util.FlinkException: Could not 
> create the DispatcherResourceManagerComponent.
> 2020-11-06T13:52:56.6370649Z  at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:257)
> 2020-11-06T13:52:56.6371371Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:412)
> 2020-11-06T13:52:56.6372258Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:378)
> 2020-11-06T13:52:56.6373276Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:334)
> 2020-11-06T13:52:56.6374182Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:50)
> 2020-11-06T13:52:56.6375055Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testBoundedBlockingShuffle(BlockingShuffleITCase.java:53)
> 2020-11-06T13:52:56.6375787Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-06T13:52:56.6376546Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-06T13:52:56.6377514Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-06T13:52:56.6378008Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-06T13:52:56.6378774Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-06T13:52:56.6379350Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-06T13:52:56.6458094Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-06T13:52:56.6459047Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-06T13:52:56.6459678Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-06T13:52:56.6460182Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-06T13:52:56.6460770Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-06T13:52:56.6461210Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-06T13:52:56.6461649Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-06T13:52:56.6462089Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-06T13:52:56.6462736Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-06T13:52:56.6463286Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-06T13:52:56.6463728Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-06T13:52:56.6464344Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-06T13:52:56.6464918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-06T13:52:56.6465428Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-06T13:52:56.6465915Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-06T13:52:56.6466405Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-06T13:52:56.6467050Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-06T13:52:56.6468341Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-11-06T13:52:56.6468794Z  at 
> 

[GitHub] [flink] flinkbot commented on pull request #13992: [FLINK-20035][tests] Use random port for rest endpoint in BlockingShuffleITCase and ShuffleCompressionITCase

2020-11-08 Thread GitBox


flinkbot commented on pull request #13992:
URL: https://github.com/apache/flink/pull/13992#issuecomment-723768653


   
   ## CI report:
   
   * 7bde630b8b67c9c31c49ec590ce58c09741f5a3c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #13992: [FLINK-20035][tests] Use random port for rest endpoint in BlockingShuffleITCase and ShuffleCompressionITCase

2020-11-08 Thread GitBox


flinkbot commented on pull request #13992:
URL: https://github.com/apache/flink/pull/13992#issuecomment-723763452


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7bde630b8b67c9c31c49ec590ce58c09741f5a3c (Mon Nov 09 
05:24:56 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-20035).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-19983) ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle unstable

2020-11-08 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao updated FLINK-19983:

Fix Version/s: 1.12.0

> ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle 
> unstable
> 
>
> Key: FLINK-19983
> URL: https://issues.apache.org/jira/browse/FLINK-19983
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yingjie Cao
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8997=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-04T14:32:19.7227316Z [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.882 s <<< FAILURE! - in 
> org.apache.flink.test.runtime.ShuffleCompressionITCase
> 2020-11-04T14:32:19.7228708Z [ERROR] 
> testDataCompressionForSortMergeBlockingShuffle[useBroadcastPartitioner = 
> true](org.apache.flink.test.runtime.ShuffleCompressionITCase)  Time elapsed: 
> 5.058 s  <<< FAILURE!
> 2020-11-04T14:32:19.7230032Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-11-04T14:32:19.7230580Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:58)
> 2020-11-04T14:32:19.7231173Z  at 
> org.apache.flink.test.runtime.ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle(ShuffleCompressionITCase.java:98)
> 2020-11-04T14:32:19.7232076Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-04T14:32:19.7232624Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-04T14:32:19.7233242Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-04T14:32:19.7233741Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-04T14:32:19.7234353Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-04T14:32:19.7235141Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-04T14:32:19.7238521Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-04T14:32:19.7239371Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-04T14:32:19.7240010Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-04T14:32:19.7240688Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-04T14:32:19.7241396Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-04T14:32:19.7242019Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7242623Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7243379Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7244051Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7244631Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7245313Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7245844Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-11-04T14:32:19.7246341Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-11-04T14:32:19.7246868Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7247616Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7248223Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7248826Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7249393Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7249963Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7250586Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-04T14:32:19.7251277Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-04T14:32:19.7252024Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-04T14:32:19.7252839Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 

[jira] [Updated] (FLINK-20035) BlockingShuffleITCase unstable with "Could not start rest endpoint on any port in port range 8081"

2020-11-08 Thread Yingjie Cao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yingjie Cao updated FLINK-20035:

Fix Version/s: 1.12.0

> BlockingShuffleITCase unstable with "Could not start rest endpoint on any 
> port in port range 8081"
> --
>
> Key: FLINK-20035
> URL: https://issues.apache.org/jira/browse/FLINK-20035
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9178=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-06T13:52:56.6369221Z [ERROR] 
> testBoundedBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.522 s  <<< ERROR!
> 2020-11-06T13:52:56.6370005Z org.apache.flink.util.FlinkException: Could not 
> create the DispatcherResourceManagerComponent.
> 2020-11-06T13:52:56.6370649Z  at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:257)
> 2020-11-06T13:52:56.6371371Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:412)
> 2020-11-06T13:52:56.6372258Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:378)
> 2020-11-06T13:52:56.6373276Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:334)
> 2020-11-06T13:52:56.6374182Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:50)
> 2020-11-06T13:52:56.6375055Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testBoundedBlockingShuffle(BlockingShuffleITCase.java:53)
> 2020-11-06T13:52:56.6375787Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-06T13:52:56.6376546Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-06T13:52:56.6377514Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-06T13:52:56.6378008Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-06T13:52:56.6378774Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-06T13:52:56.6379350Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-06T13:52:56.6458094Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-06T13:52:56.6459047Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-06T13:52:56.6459678Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-06T13:52:56.6460182Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-06T13:52:56.6460770Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-06T13:52:56.6461210Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-06T13:52:56.6461649Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-06T13:52:56.6462089Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-06T13:52:56.6462736Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-06T13:52:56.6463286Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-06T13:52:56.6463728Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-06T13:52:56.6464344Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-06T13:52:56.6464918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-06T13:52:56.6465428Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-06T13:52:56.6465915Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-06T13:52:56.6466405Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-06T13:52:56.6467050Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-06T13:52:56.6468341Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-11-06T13:52:56.6468794Z  at 
> 

[jira] [Commented] (FLINK-20035) BlockingShuffleITCase unstable with "Could not start rest endpoint on any port in port range 8081"

2020-11-08 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228349#comment-17228349
 ] 

Yingjie Cao commented on FLINK-20035:
-

This failure is caused by a port bind exception. I have submitted a PR that 
uses a random port for BlockingShuffleITCase and ShuffleCompressionITCase.
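The fix described above relies on a standard technique: binding to port 0 asks the operating system for a currently free ephemeral port, so concurrently running test clusters cannot collide on a fixed port such as 8081. A minimal stand-alone Java sketch of that technique follows; the class and method names are illustrative only, not the actual PR code, and the exact Flink configuration option the PR sets (presumably the REST bind-port option) is not shown here.

```java
import java.io.IOException;
import java.net.ServerSocket;

// Illustrative demo (not Flink code): pick a free port the way test
// harnesses typically do, by letting the OS choose an ephemeral port.
public class EphemeralPortDemo {

    static int pickFreePort() throws IOException {
        // Binding to port 0 makes the OS assign a currently free ephemeral
        // port; closing the socket frees it for the component under test.
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = pickFreePort();
        if (port <= 0) {
            throw new IllegalStateException("expected a positive OS-assigned port");
        }
        System.out.println("OS-assigned port: " + port);
    }
}
```

Note that there is a small race between closing the probe socket and the test cluster binding the port; passing "0" directly to the server's bind configuration, where supported, avoids it entirely.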

> BlockingShuffleITCase unstable with "Could not start rest endpoint on any 
> port in port range 8081"
> --
>
> Key: FLINK-20035
> URL: https://issues.apache.org/jira/browse/FLINK-20035
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9178=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-06T13:52:56.6369221Z [ERROR] 
> testBoundedBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.522 s  <<< ERROR!
> 2020-11-06T13:52:56.6370005Z org.apache.flink.util.FlinkException: Could not 
> create the DispatcherResourceManagerComponent.
> 2020-11-06T13:52:56.6370649Z  at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:257)
> 2020-11-06T13:52:56.6371371Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:412)
> 2020-11-06T13:52:56.6372258Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:378)
> 2020-11-06T13:52:56.6373276Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:334)
> 2020-11-06T13:52:56.6374182Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:50)
> 2020-11-06T13:52:56.6375055Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testBoundedBlockingShuffle(BlockingShuffleITCase.java:53)
> 2020-11-06T13:52:56.6375787Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-06T13:52:56.6376546Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-06T13:52:56.6377514Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-06T13:52:56.6378008Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-06T13:52:56.6378774Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-06T13:52:56.6379350Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-06T13:52:56.6458094Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-06T13:52:56.6459047Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-06T13:52:56.6459678Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-06T13:52:56.6460182Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-06T13:52:56.6460770Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-06T13:52:56.6461210Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-06T13:52:56.6461649Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-06T13:52:56.6462089Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-06T13:52:56.6462736Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-06T13:52:56.6463286Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-06T13:52:56.6463728Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-06T13:52:56.6464344Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-06T13:52:56.6464918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-06T13:52:56.6465428Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-06T13:52:56.6465915Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-06T13:52:56.6466405Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-06T13:52:56.6467050Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-06T13:52:56.6468341Z  at 
> 

[jira] [Updated] (FLINK-20035) BlockingShuffleITCase unstable with "Could not start rest endpoint on any port in port range 8081"

2020-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20035:
---
Labels: pull-request-available test-stability  (was: test-stability)

> BlockingShuffleITCase unstable with "Could not start rest endpoint on any 
> port in port range 8081"
> --
>
> Key: FLINK-20035
> URL: https://issues.apache.org/jira/browse/FLINK-20035
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9178=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-06T13:52:56.6369221Z [ERROR] 
> testBoundedBlockingShuffle(org.apache.flink.test.runtime.BlockingShuffleITCase)
>   Time elapsed: 3.522 s  <<< ERROR!
> 2020-11-06T13:52:56.6370005Z org.apache.flink.util.FlinkException: Could not 
> create the DispatcherResourceManagerComponent.
> 2020-11-06T13:52:56.6370649Z  at 
> org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:257)
> 2020-11-06T13:52:56.6371371Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.createDispatcherResourceManagerComponents(MiniCluster.java:412)
> 2020-11-06T13:52:56.6372258Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.setupDispatcherResourceManagerComponents(MiniCluster.java:378)
> 2020-11-06T13:52:56.6373276Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:334)
> 2020-11-06T13:52:56.6374182Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:50)
> 2020-11-06T13:52:56.6375055Z  at 
> org.apache.flink.test.runtime.BlockingShuffleITCase.testBoundedBlockingShuffle(BlockingShuffleITCase.java:53)
> 2020-11-06T13:52:56.6375787Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-06T13:52:56.6376546Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-06T13:52:56.6377514Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-06T13:52:56.6378008Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-06T13:52:56.6378774Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-06T13:52:56.6379350Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-06T13:52:56.6458094Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-06T13:52:56.6459047Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-06T13:52:56.6459678Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-06T13:52:56.6460182Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-06T13:52:56.6460770Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-06T13:52:56.6461210Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-06T13:52:56.6461649Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-06T13:52:56.6462089Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-06T13:52:56.6462736Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-06T13:52:56.6463286Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-06T13:52:56.6463728Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-06T13:52:56.6464344Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-06T13:52:56.6464918Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-06T13:52:56.6465428Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-06T13:52:56.6465915Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
> 2020-11-06T13:52:56.6466405Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> 2020-11-06T13:52:56.6467050Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 2020-11-06T13:52:56.6468341Z  at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
> 2020-11-06T13:52:56.6468794Z  at 
> 

[GitHub] [flink] wsry opened a new pull request #13992: [FLINK-20035][tests] Use random port for rest endpoint in BlockingShuffleITCase and ShuffleCompressionITCase

2020-11-08 Thread GitBox


wsry opened a new pull request #13992:
URL: https://github.com/apache/flink/pull/13992


   ## What is the purpose of the change
   
   Use random port for rest endpoint in BlockingShuffleITCase and 
ShuffleCompressionITCase to make the cases more stable.
   
   ## Brief change log
   
 - Use random port for rest endpoint in BlockingShuffleITCase and 
ShuffleCompressionITCase to make the cases more stable.
   
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
BlockingShuffleITCase and ShuffleCompressionITCase.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   







[jira] [Assigned] (FLINK-20016) Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-20016:
---

Assignee: Shuiqiang Chen

> Support TimestampAssigner and WatermarkGenerator for Python DataStream API.
> ---
>
> Key: FLINK-20016
> URL: https://issues.apache.org/jira/browse/FLINK-20016
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Assignee: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-20016) Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-20016.
---
Resolution: Fixed

Merged to master via f24cb3f3b7e773706188ae92998b3e1ffbf1829e

> Support TimestampAssigner and WatermarkGenerator for Python DataStream API.
> ---
>
> Key: FLINK-20016
> URL: https://issues.apache.org/jira/browse/FLINK-20016
> Project: Flink
>  Issue Type: Improvement
>  Components: API / Python
>Reporter: Shuiqiang Chen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] dianfu closed pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


dianfu closed pull request #13986:
URL: https://github.com/apache/flink/pull/13986


   







[GitHub] [flink] flinkbot edited a comment on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723733200


   
   ## CI report:
   
   * bcabafeacefca7370b3c2f3569ff8608704a282b Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9331)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13988: [FLINK-19265][FLINK-20049][core] Source API final adjustments

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13988:
URL: https://github.com/apache/flink/pull/13988#issuecomment-723691966


   
   ## CI report:
   
   * 1d621a3f57b64dbbd9f645347267aef8b245ca74 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9324)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13991: [FLINK-19983][network] Remove wrong state checking in SortMergeSubpartitionReader

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13991:
URL: https://github.com/apache/flink/pull/13991#issuecomment-723737863


   
   ## CI report:
   
   * 9d1b60190d1e7c588b225301792b63789227a91b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9332)
 
   
   







[jira] [Commented] (FLINK-19802) Let BulkFormat createReader and restoreReader methods accept Splits directly

2020-11-08 Thread Steven Zhen Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228338#comment-17228338
 ] 

Steven Zhen Wu commented on FLINK-19802:


[~sewen] sorry, I didn't mean that we need this before the 1.12.0 release. Yes, 
currently I am also thinking about just reusing some components like 
BulkFormat/reader.

> Let BulkFormat createReader and restoreReader methods accept Splits directly
> 
>
> Key: FLINK-19802
> URL: https://issues.apache.org/jira/browse/FLINK-19802
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / FileSystem
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
> Fix For: 1.12.0
>
>
> To support sources where the splits communicate additional information, the 
> BulkFormats should accept a generic split type, instead of path/offset/length 
> from the splits directly.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #13991: [FLINK-19983][network] Remove wrong state checking in SortMergeSubpartitionReader

2020-11-08 Thread GitBox


flinkbot commented on pull request #13991:
URL: https://github.com/apache/flink/pull/13991#issuecomment-723737863


   
   ## CI report:
   
   * 9d1b60190d1e7c588b225301792b63789227a91b UNKNOWN
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 5da43fd50d04596217e89f547f97dbb4711ca7ee Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9325)
 
   * 7769826c671c1c7e081128ab1a5cf74819808456 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9330)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13986:
URL: https://github.com/apache/flink/pull/13986#issuecomment-723648682


   
   ## CI report:
   
   * 6afd298cd5356502822e36537d7a1c8ba67052b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9319)
 
   * 07eebefc93a1ea2a4cc90ece59125a3b82970be8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9326)
 
   * 80a45e8f1c6942ab89096b1bc277ae3990ffce4d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9329)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723733200


   
   ## CI report:
   
   * bcabafeacefca7370b3c2f3569ff8608704a282b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9331)
 
   
   







[jira] [Commented] (FLINK-19983) ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle unstable

2020-11-08 Thread Yingjie Cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228333#comment-17228333
 ] 

Yingjie Cao commented on FLINK-19983:
-

After some investigation, I found that the state check does not hold, so we just 
need to remove it. The callers, including CreditBasedSequenceNumberingViewReader 
and LocalInputChannel, can handle this case correctly; 
BoundedBlockingSubpartitionReader does the same thing. The PR is now available 
for review.
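Conceptually, the fix can be sketched as follows. This is a hypothetical, self-contained illustration of the pattern (return null instead of asserting state, and let the caller cope); the class and method names are placeholders, not the actual Flink classes.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical illustration of the fix described above: instead of asserting
// (checkState-style) that a buffer is available, the reader simply returns
// null when none is, and the caller handles that case.
class SubpartitionReaderSketch {
    private final Queue<String> buffers = new ArrayDeque<>();

    void add(String buffer) {
        buffers.add(buffer);
    }

    // Before the fix (conceptually):
    //   if (buffers.isEmpty()) throw new IllegalStateException("no buffer");
    // After the fix: return null and let the caller decide what to do.
    String getNextBuffer() {
        return buffers.poll(); // null when no buffer is currently available
    }
}

class CallerSketch {
    // The caller tolerates a null result, mirroring how the callers named in
    // the comment above are said to handle an empty reader correctly.
    static String readOrDefault(SubpartitionReaderSketch reader) {
        String buffer = reader.getNextBuffer();
        return buffer != null ? buffer : "<no data yet>";
    }
}
```

The point is that the empty-reader condition is a normal, recoverable state for the callers, so an internal assertion on it was simply wrong.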

> ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle 
> unstable
> 
>
> Key: FLINK-19983
> URL: https://issues.apache.org/jira/browse/FLINK-19983
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yingjie Cao
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8997=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-04T14:32:19.7227316Z [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.882 s <<< FAILURE! - in 
> org.apache.flink.test.runtime.ShuffleCompressionITCase
> 2020-11-04T14:32:19.7228708Z [ERROR] 
> testDataCompressionForSortMergeBlockingShuffle[useBroadcastPartitioner = 
> true](org.apache.flink.test.runtime.ShuffleCompressionITCase)  Time elapsed: 
> 5.058 s  <<< FAILURE!
> 2020-11-04T14:32:19.7230032Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-11-04T14:32:19.7230580Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:58)
> 2020-11-04T14:32:19.7231173Z  at 
> org.apache.flink.test.runtime.ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle(ShuffleCompressionITCase.java:98)
> 2020-11-04T14:32:19.7232076Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-04T14:32:19.7232624Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-04T14:32:19.7233242Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-04T14:32:19.7233741Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-04T14:32:19.7234353Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-04T14:32:19.7235141Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-04T14:32:19.7238521Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-04T14:32:19.7239371Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-04T14:32:19.7240010Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-04T14:32:19.7240688Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-04T14:32:19.7241396Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-04T14:32:19.7242019Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7242623Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7243379Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7244051Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7244631Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7245313Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7245844Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-11-04T14:32:19.7246341Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-11-04T14:32:19.7246868Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7247616Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7248223Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7248826Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7249393Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7249963Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7250586Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-04T14:32:19.7251277Z  at 
> 

[GitHub] [flink] flinkbot commented on pull request #13991: [FLINK-19983][network] Remove wrong state checking in SortMergeSubpartitionReader

2020-11-08 Thread GitBox


flinkbot commented on pull request #13991:
URL: https://github.com/apache/flink/pull/13991#issuecomment-723734174


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9d1b60190d1e7c588b225301792b63789227a91b (Mon Nov 09 
03:34:00 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-19983) ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle unstable

2020-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-19983:
---
Labels: pull-request-available test-stability  (was: test-stability)

> ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle 
> unstable
> 
>
> Key: FLINK-19983
> URL: https://issues.apache.org/jira/browse/FLINK-19983
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yingjie Cao
>Priority: Critical
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8997=logs=5c8e7682-d68f-54d1-16a2-a09310218a49=f508e270-48d6-5f1e-3138-42a17e0714f0
> {code}
> 2020-11-04T14:32:19.7227316Z [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 16.882 s <<< FAILURE! - in 
> org.apache.flink.test.runtime.ShuffleCompressionITCase
> 2020-11-04T14:32:19.7228708Z [ERROR] 
> testDataCompressionForSortMergeBlockingShuffle[useBroadcastPartitioner = 
> true](org.apache.flink.test.runtime.ShuffleCompressionITCase)  Time elapsed: 
> 5.058 s  <<< FAILURE!
> 2020-11-04T14:32:19.7230032Z java.lang.AssertionError: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-11-04T14:32:19.7230580Z  at 
> org.apache.flink.test.runtime.JobGraphRunningUtil.execute(JobGraphRunningUtil.java:58)
> 2020-11-04T14:32:19.7231173Z  at 
> org.apache.flink.test.runtime.ShuffleCompressionITCase.testDataCompressionForSortMergeBlockingShuffle(ShuffleCompressionITCase.java:98)
> 2020-11-04T14:32:19.7232076Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-11-04T14:32:19.7232624Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-11-04T14:32:19.7233242Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-11-04T14:32:19.7233741Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-11-04T14:32:19.7234353Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-11-04T14:32:19.7235141Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-11-04T14:32:19.7238521Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-11-04T14:32:19.7239371Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-11-04T14:32:19.7240010Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-11-04T14:32:19.7240688Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-11-04T14:32:19.7241396Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-11-04T14:32:19.7242019Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7242623Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7243379Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7244051Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7244631Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7245313Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7245844Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-11-04T14:32:19.7246341Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-11-04T14:32:19.7246868Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-11-04T14:32:19.7247616Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-11-04T14:32:19.7248223Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-11-04T14:32:19.7248826Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-11-04T14:32:19.7249393Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-11-04T14:32:19.7249963Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-11-04T14:32:19.7250586Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
> 2020-11-04T14:32:19.7251277Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
> 2020-11-04T14:32:19.7252024Z  at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
> 2020-11-04T14:32:19.7252839Z  at 
> 

[GitHub] [flink] becketqin commented on a change in pull request #13988: [FLINK-19265][FLINK-20049][core] Source API final adjustments

2020-11-08 Thread GitBox


becketqin commented on a change in pull request #13988:
URL: https://github.com/apache/flink/pull/13988#discussion_r519532305



##
File path: 
flink-core/src/main/java/org/apache/flink/api/connector/source/SplitEnumerator.java
##
@@ -41,12 +43,14 @@
void start();
 
/**
-* Handles the source event from the source reader.
+* Handles the request for a split. This method is called when the 
reader with the given subtask
+* id calls the {@link SourceReaderContext#sendSplitRequest()} method.
 *
 * @param subtaskId the subtask id of the source reader who sent the 
source event.
-* @param sourceEvent the source event from the source reader.
+* @param requesterHostname Optional, the hostname where the requesting 
task is running.
+*  This can be used to make split assignments 
locality-aware.
 */
-   void handleSourceEvent(int subtaskId, SourceEvent sourceEvent);
+   void handleSplitRequest(int subtaskId, @Nullable String 
requesterHostname);

Review comment:
   Minor: the `requesterHostName` parameter seems unnecessary. The 
implementation can retrieve this information from 
`SplitEnumeratorContext.registeredReaders().get(subtaskId)`, where the hostname 
can be stored in the `location` field.
   
   Also, the hostname seems to be a property of the reader rather than of the 
request, so maybe there is no need to include it in the `AddSplitsEvent` either.
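   The alternative can be sketched as below. This is a hypothetical, self-contained illustration: `EnumeratorSketch`, `ReaderInfo`, and the field names are placeholders for the idea of looking the hostname up from the registered-readers map, not Flink's actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: the enumerator keeps a map of registered readers, so a split
// request only needs the subtask id; the hostname is looked up, not passed.
class EnumeratorSketch {
    static final class ReaderInfo {
        final int subtaskId;
        final String location; // hostname where the reader task runs
        ReaderInfo(int subtaskId, String location) {
            this.subtaskId = subtaskId;
            this.location = location;
        }
    }

    private final Map<Integer, ReaderInfo> registeredReaders = new HashMap<>();

    void registerReader(int subtaskId, String hostname) {
        registeredReaders.put(subtaskId, new ReaderInfo(subtaskId, hostname));
    }

    // handleSplitRequest no longer needs a hostname parameter:
    String handleSplitRequest(int subtaskId) {
        ReaderInfo info = registeredReaders.get(subtaskId);
        // use the hostname (possibly null) for locality-aware split assignment
        return info == null ? null : info.location;
    }
}
```
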









[GitHub] [flink] wsry opened a new pull request #13991: [FLINK-19983][network] Remove wrong state checking in SortMergeSubpartitionReader

2020-11-08 Thread GitBox


wsry opened a new pull request #13991:
URL: https://github.com/apache/flink/pull/13991


   ## What is the purpose of the change
   
   Remove wrong state checking in SortMergeSubpartitionReader.
   
   ## Brief change log
   
 - Remove wrong state checking in SortMergeSubpartitionReader.
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as 
ShuffleCompressionITCase#testDataCompressionForSortMergeBlockingShuffle.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   







[GitHub] [flink] flinkbot commented on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-08 Thread GitBox


flinkbot commented on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723733200


   
   ## CI report:
   
   * bcabafeacefca7370b3c2f3569ff8608704a282b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 5da43fd50d04596217e89f547f97dbb4711ca7ee Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9325)
 
   * 7769826c671c1c7e081128ab1a5cf74819808456 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13606: [FLINK-19554][e2e] Implement a unified testing framework for connectors

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13606:
URL: https://github.com/apache/flink/pull/13606#issuecomment-707535989


   
   ## CI report:
   
   * 7f5361cb816e2fd6e3abad7adcf9caaabd126d2e Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9327)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20052) kafka/gelly pre-commit test failed due to process hang

2020-11-08 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated FLINK-20052:
--
Issue Type: Test  (was: Bug)

> kafka/gelly pre-commit test failed due to process hang
> --
>
> Key: FLINK-20052
> URL: https://issues.apache.org/jira/browse/FLINK-20052
> Project: Flink
>  Issue Type: Test
>  Components: Library / Graph Processing (Gelly), Tests
>Affects Versions: 1.12.0
>Reporter: Yu Li
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> The kafka/gelly test failed in recent pre-commit [Azure 
> build|https://s.apache.org/flink-kafka-gelly-failure]:
> {noformat}
> Process produced no output for 900 seconds.
> ...
> ##[error]Bash exited with code '143'.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Comment Edited] (FLINK-20045) ZooKeeperLeaderElectionTest.testZooKeeperLeaderElectionRetrieval failed with "TimeoutException: Contender was not elected as the leader within 200000ms"

2020-11-08 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228330#comment-17228330
 ] 

Yang Wang edited comment on FLINK-20045 at 11/9/20, 3:12 AM:
-

I think the root cause is that leadership was granted too quickly, before 
{{TestingLeaderElectionEventHandler#init()}} had been called, so we can find 
the following exception in the maven log. This cannot happen in the production 
code since we have a {{lock}} in {{DefaultLeaderElectionService}}.

 

How can we fix the unstable tests?

I suggest adding a "wait-with-timeout" in 
{{TestingLeaderElectionEventHandler#onGrantLeadership}}, 
{{#onRevokeLeadership}}, and {{#onLeaderInformationChange}} so that there is 
enough time to create the {{LeaderElectionDriver}} and then {{init}} the 
{{TestingLeaderElectionEventHandler}}.
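The suggested "wait-with-timeout" can be sketched with a plain CountDownLatch. This is a hypothetical illustration of the pattern, not the actual Flink test code; the class and method names are placeholders.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Sketch of the "wait-with-timeout" idea: instead of failing with
// "init() should be called first", a leadership callback blocks (with a
// bound) until init() has run.
class InitAwaitingHandler {
    private final CountDownLatch initialized = new CountDownLatch(1);

    void init() {
        initialized.countDown();
    }

    void onGrantLeadership() throws InterruptedException {
        // Wait up to 30s for init() instead of throwing immediately.
        if (!initialized.await(30, TimeUnit.SECONDS)) {
            throw new IllegalStateException("init() was not called in time");
        }
        // ... handle the leadership grant ...
    }
}
```

With a guard like this, a grant that races ahead of {{init()}} simply waits instead of surfacing an exception from the Curator listener thread.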

 
{code:java}
10:30:37,419 [Curator-LeaderLatch-0] WARN  
org.apache.flink.shaded.curator4.org.apache.curator.utils.ZKPaths [] - The 
version of ZooKeeper being used doesn't support Container nodes. 
CreateMode.PERSISTENT will be used instead.10:30:37,419 [Curator-LeaderLatch-0] 
WARN  org.apache.flink.shaded.curator4.org.apache.curator.utils.ZKPaths [] - 
The version of ZooKeeper being used doesn't support Container nodes. 
CreateMode.PERSISTENT will be used instead.10:30:37,468 [    main-EventThread] 
ERROR 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer
 [] - Listener (ZooKeeperLeaderElectionDriver{leaderPath='/leader'}) threw an 
exceptionorg.apache.flink.util.FlinkRuntimeException: init() should be called 
first. at 
org.apache.flink.runtime.leaderelection.TestingLeaderElectionEventHandler.onGrantLeadership(TestingLeaderElectionEventHandler.java:46)
 ~[test-classes/:?] at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver.isLeader(ZooKeeperLeaderElectionDriver.java:158)
 ~[classes/:?] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$9.apply(LeaderLatch.java:693)
 ~[flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$9.apply(LeaderLatch.java:689)
 ~[flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:100)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.shaded.com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:92)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.setLeadership(LeaderLatch.java:688)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.checkLeadership(LeaderLatch.java:567)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.access$700(LeaderLatch.java:65)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$7.processResult(LeaderLatch.java:618)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:883)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:653)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.GetChildrenBuilderImpl$2.processResult(GetChildrenBuilderImpl.java:187)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:601)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0]10:33:58,379 [           
     main] INFO  
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] - 
Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader'}10:33:58,383 [       
         main] 

[jira] [Commented] (FLINK-20045) ZooKeeperLeaderElectionTest.testZooKeeperLeaderElectionRetrieval failed with "TimeoutException: Contender was not elected as the leader within 200000ms"

2020-11-08 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228330#comment-17228330
 ] 

Yang Wang commented on FLINK-20045:
---

I think the root cause is that leadership was granted too quickly, before 
{{TestingLeaderElectionEventHandler#init()}} had been called, so we can find 
the following exception in the maven log. This cannot happen in the production 
code since we have a {{lock}} in {{DefaultLeaderElectionService}}.

 

How can we fix the unstable tests?

I suggest adding a "wait-with-timeout" in the 
{{TestingLeaderElectionEventHandler}} so that there is enough time to create 
the {{LeaderElectionDriver}} and then {{init}} the 
{{TestingLeaderElectionEventHandler}}.

 
{code:java}
10:30:37,419 [Curator-LeaderLatch-0] WARN  
org.apache.flink.shaded.curator4.org.apache.curator.utils.ZKPaths [] - The 
version of ZooKeeper being used doesn't support Container nodes. 
CreateMode.PERSISTENT will be used instead.10:30:37,419 [Curator-LeaderLatch-0] 
WARN  org.apache.flink.shaded.curator4.org.apache.curator.utils.ZKPaths [] - 
The version of ZooKeeper being used doesn't support Container nodes. 
CreateMode.PERSISTENT will be used instead.10:30:37,468 [    main-EventThread] 
ERROR 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer
 [] - Listener (ZooKeeperLeaderElectionDriver{leaderPath='/leader'}) threw an 
exceptionorg.apache.flink.util.FlinkRuntimeException: init() should be called 
first. at 
org.apache.flink.runtime.leaderelection.TestingLeaderElectionEventHandler.onGrantLeadership(TestingLeaderElectionEventHandler.java:46)
 ~[test-classes/:?] at 
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver.isLeader(ZooKeeperLeaderElectionDriver.java:158)
 ~[classes/:?] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$9.apply(LeaderLatch.java:693)
 ~[flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$9.apply(LeaderLatch.java:689)
 ~[flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:100)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.shaded.com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:92)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.setLeadership(LeaderLatch.java:688)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.checkLeadership(LeaderLatch.java:567)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch.access$700(LeaderLatch.java:65)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.recipes.leader.LeaderLatch$7.processResult(LeaderLatch.java:618)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.sendToBackgroundCallback(CuratorFrameworkImpl.java:883)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:653)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.WatcherRemovalFacade.processBackgroundOperation(WatcherRemovalFacade.java:152)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.curator4.org.apache.curator.framework.imps.GetChildrenBuilderImpl$2.processResult(GetChildrenBuilderImpl.java:187)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:601)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0] at 
org.apache.flink.shaded.zookeeper3.org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:508)
 [flink-shaded-zookeeper-3-3.4.14-12.0.jar:3.4.14-12.0]10:33:58,379 [           
     main] INFO  
org.apache.flink.runtime.leaderelection.ZooKeeperLeaderElectionDriver [] - 
Closing ZooKeeperLeaderElectionDriver{leaderPath='/leader'}10:33:58,383 [       
         main] INFO  
org.apache.flink.runtime.leaderretrieval.ZooKeeperLeaderRetrievalDriver [] - 
Closing 

[GitHub] [flink] flinkbot commented on pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-08 Thread GitBox


flinkbot commented on pull request #13990:
URL: https://github.com/apache/flink/pull/13990#issuecomment-723725737


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bcabafeacefca7370b3c2f3569ff8608704a282b (Mon Nov 09 
02:58:14 UTC 2020)
   
   **Warnings:**
* Documentation files were touched, but no `.zh.md` files: Update Chinese 
documentation or file Jira ticket.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] JingsongLi opened a new pull request #13990: [FLINK-20053][table][doc] Add document for file compaction

2020-11-08 Thread GitBox


JingsongLi opened a new pull request #13990:
URL: https://github.com/apache/flink/pull/13990


   
   ## What is the purpose of the change
   
   Add document for file compaction
   
   ## Brief change log
   
   - In Flink FileSystem document, Add File Compaction
   







[jira] [Updated] (FLINK-20053) Add document for file compaction

2020-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20053?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-20053:
---
Labels: pull-request-available  (was: )

> Add document for file compaction
> 
>
> Key: FLINK-20053
> URL: https://issues.apache.org/jira/browse/FLINK-20053
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-19717) SourceReaderBase.pollNext may return END_OF_INPUT if SplitReader.fetch throws

2020-11-08 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228328#comment-17228328
 ] 

Jiangjie Qin commented on FLINK-19717:
--

[~sewen] I have merged the patch to master. But cherry-picking the patch to 
release-1.11 has some conflicts. Do you want to sequentialize the backporting 
of this patch with other Source patches? Otherwise I can also rebase the patch 
on release-1.11.

> SourceReaderBase.pollNext may return END_OF_INPUT if SplitReader.fetch throws
> -
>
> Key: FLINK-19717
> URL: https://issues.apache.org/jira/browse/FLINK-19717
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Kezhu Wang
>Assignee: Kezhu Wang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.12.0, 1.11.3
>
>
> Here are my imaginative execution flows:
> 1. In mailbox thread, we enters {{SourceReaderBase.getNextFetch}}. After 
> executes {{splitFetcherManager.checkErrors()}} but before 
> {{elementsQueue.poll()}}, {{SplitFetcher}} gets its chance to run.
> 2. {{SplitFetcher}} terminates due to exception from {{SplitReader.fetch}}. 
> {{SplitFetcher.shutdownHook}} will removes this exceptional fetcher from 
> {{SplitFetcherManager}}.
> 3. In mailbox thread,  {{elementsQueue.poll()}} executes. If there is no 
> elements in queue, {{elementsQueue}} will be reset to unavailable.
> 4. After getting no elements from {{SourceReaderBase.getNextFetch}}, we will 
> enter {{SourceReaderBase.finishedOrAvailableLater}}. If the exceptional 
> fetcher is last alive fetcher, then 
> {{SourceReaderBase.finishedOrAvailableLater}} may evaluate to 
> {{InputStatus.END_OF_INPUT}}
> 5. {{StreamTask}} will terminate itself due to {{InputStatus.END_OF_INPUT}}.
> Here is revised {{SourceReaderBaseTest.testExceptionInSplitReader}} which 
> will fails in rate about 1/2.
> {code:java}
>   @Test
>   public void testExceptionInSplitReader() throws Exception {
>       expectedException.expect(RuntimeException.class);
>       expectedException.expectMessage("One or more fetchers have encountered exception");
>       final String errMsg = "Testing Exception";
>
>       FutureCompletingBlockingQueue<RecordsWithSplitIds<int[]>> elementsQueue =
>               new FutureCompletingBlockingQueue<>();
>       // We have to handle split changes first, otherwise fetch will not be called.
>       try (MockSourceReader reader = new MockSourceReader(
>               elementsQueue,
>               () -> new SplitReader<int[], MockSourceSplit>() {
>                   @Override
>                   public RecordsWithSplitIds<int[]> fetch() {
>                       throw new RuntimeException(errMsg);
>                   }
>
>                   @Override
>                   public void handleSplitsChanges(SplitsChange<MockSourceSplit> splitsChanges) {}
>
>                   @Override
>                   public void wakeUp() {}
>               },
>               getConfig(),
>               null)) {
>           ValidatingSourceOutput output = new ValidatingSourceOutput();
>           reader.addSplits(Collections.singletonList(getSplit(0,
>                   NUM_RECORDS_PER_SPLIT,
>                   Boundedness.CONTINUOUS_UNBOUNDED)));
>           reader.handleSourceEvents(new NoMoreSplitsEvent());
>           // This is not a real infinite loop, it is supposed to throw exception after some polls.
>           while (true) {
>               InputStatus inputStatus = reader.pollNext(output);
>               assertNotEquals(InputStatus.END_OF_INPUT, inputStatus);
>               // Add a sleep to avoid tight loop.
>               Thread.sleep(0);
>           }
>       }
>   }
> {code}
> This revised {{SourceReaderBaseTest.testExceptionInSplitReader}} differs from 
> existing one in three places:
>  1. {{reader.handleSourceEvents(new NoMoreSplitsEvent())}} sets 
> {{SourceReaderBase.noMoreSplitsAssignment}} to true.
>  2. Add assertion to assert that {{reader.pollNext}} will not return 
> {{InputStatus.END_OF_INPUT}}.
>  3. Modify {{Thread.sleep(1)}} to {{Thread.sleep(0)}} to increase failure 
> rate from 1/200 to 1/2.
> See [FLINK-19448|https://issues.apache.org/jira/browse/FLINK-19448] for 
> initial discussion.
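The check-then-act race in steps 1-5 above can be replayed deterministically without threads. The sketch below uses plain stand-in objects (all names are illustrative inventions, not Flink classes) to show how a pending fetcher error can be skipped and `END_OF_INPUT` returned:

```python
class MiniSourceReader:
    """Toy model of SourceReaderBase's polling path; not Flink API."""

    def __init__(self):
        self.fetchers = ["fetcher-0"]  # one live split fetcher
        self.queue = []                # the elements queue
        self.no_more_splits = True     # NoMoreSplitsEvent already handled
        self.pending_error = None

    def check_errors(self):
        if self.pending_error:
            raise RuntimeError(self.pending_error)

    def poll_next(self, fetcher_fails_after_check=False):
        self.check_errors()  # step 1: the error check runs first ...
        if fetcher_fails_after_check:
            # steps 1-2: SplitReader.fetch throws *after* the check; the
            # shutdown hook records the error and removes the fetcher.
            self.pending_error = "Testing Exception"
            self.fetchers.clear()
        if self.queue:       # step 3: poll the elements queue (empty here)
            return "MORE_AVAILABLE"
        # step 4: no elements, no fetchers, no more splits -> the reader
        # wrongly concludes the input is exhausted despite a pending error.
        if self.no_more_splits and not self.fetchers:
            return "END_OF_INPUT"
        return "NOTHING_AVAILABLE"


reader = MiniSourceReader()
status = reader.poll_next(fetcher_fails_after_check=True)
print(status)  # END_OF_INPUT -- the "Testing Exception" is lost this round
```

The fix discussed in the pull request amounts to making sure the pending error surfaces before the reader can report `END_OF_INPUT`.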





[GitHub] [flink] flinkbot edited a comment on pull request #13606: [FLINK-19554][e2e] Implement a unified testing framework for connectors

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13606:
URL: https://github.com/apache/flink/pull/13606#issuecomment-707535989


   
   ## CI report:
   
   * 8063b2a34512c8d8e99cdba06a1872d9786832bb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9199)
 
   * 7f5361cb816e2fd6e3abad7adcf9caaabd126d2e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9327)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Resolved] (FLINK-19238) RocksDB performance issue with low managed memory and high parallelism

2020-11-08 Thread Yu Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li resolved FLINK-19238.
---
Fix Version/s: 1.12.0
   Resolution: Implemented

Merged into master via 93c6256aace5004719241c64d320de5a51f7ec5c

> RocksDB performance issue with low managed memory and high parallelism
> --
>
> Key: FLINK-19238
> URL: https://issues.apache.org/jira/browse/FLINK-19238
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends
>Affects Versions: 1.11.1
>Reporter: Juha Mynttinen
>Assignee: Juha Mynttinen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> h2. The issue
> When using {{RocksDBStateBackend}}, it's possible to configure RocksDB so 
> that it almost constantly flushes the active memtable, causing high IO and 
> CPU usage.
> This happens because this check will be true essentially always 
> [https://github.com/dataArtisans/frocksdb/blob/49bc897d5d768026f1eb816d960c1f2383396ef4/include/rocksdb/write_buffer_manager.h#L47].
> h2. Reproducing the issue
> To reproduce the issue, the following needs to happen:
>  * Use RocksDB state backend
>  * Use managed memory
>  * have "low" managed memory size
>  * have "high" parallelism (e.g. 5) OR have enough operators (the exact count 
> unknown)
> The easiest way to do all this is to do 
> {{StreamExecutionEnvironment.createLocalEnvironment}} and creating a simple 
> Flink job and setting the parallelism "high enough". Nothing else is needed.
> h2. Background
> Arena memory block size is by default 1/8 of the memtable size 
> [https://github.com/dataArtisans/frocksdb/blob/49bc897d5d768026f1eb816d960c1f2383396ef4/db/column_family.cc#L196].
>  When the memtable has any data, it'll consume one arena block. The arena 
> block size will be higher than the "mutable limit". The mutable limit is 
> calculated from the shared write buffer manager size. Having low managed 
> memory and high parallelism pushes the mutable limit to a too low value.
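The arithmetic above can be illustrated with a small sketch. The 1/8 arena-block ratio and the 7/8 mutable limit are RocksDB defaults; the concrete byte sizes below are made-up example numbers, not Flink defaults:

```python
MIB = 1 << 20

def arena_block_size(write_buffer_size):
    # RocksDB default: arena block size = memtable (write buffer) size / 8.
    return write_buffer_size // 8

def mutable_limit(shared_buffer_capacity):
    # RocksDB's WriteBufferManager triggers flushes once mutable memtable
    # memory exceeds 7/8 of the configured capacity.
    return shared_buffer_capacity * 7 // 8

# Illustrative numbers: a small shared write-buffer budget split across
# several parallel subtasks.
write_buffer_size = 64 * MIB   # per-memtable size
parallelism = 5
shared_capacity = 32 * MIB     # "low" managed-memory write buffer budget

block = arena_block_size(write_buffer_size)  # 8 MiB per arena block
limit = mutable_limit(shared_capacity)       # 28 MiB

# Each subtask with any data holds at least one arena block, so usage
# jumps in 8 MiB steps; with parallelism 5 the very first blocks already
# exceed the limit and nearly every write triggers a memtable flush.
print(parallelism * block > limit)  # True
```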
> h2. Documentation
> In docs 
> ([https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/large_state_tuning.html):]
>  
>   
>  "An advanced option (expert mode) to reduce the number of MemTable flushes 
> in setups with many states, is to tune RocksDB’s ColumnFamily options (arena 
> block size, max background flush threads, etc.) via a RocksDBOptionsFactory". 
>   
>  This snippet in the docs is probably talking about the issue I'm witnessing. 
> I think there are two issues here:
>   
>  1) it's hard/impossible to know what kind of performance one can expect from 
> a Flink application. Thus, it's hard to know if one is suffering from e.g. 
> from this performance issue, or if the system is performing normally (and 
> inherently being slow).
>  2) even if one suspects a performance issue, it's very hard to find the root 
> cause of the performance issue (memtable flush happening frequently). To find 
> out this one would need to know what's the normal flush frequency.
>   
>  Also the doc says "in setups with many states". The same problem is hit when 
> using just one state, but "high" parallelism (5).
>   
> If the arena block size _ever_ needs to be configured only to "fix" this 
> issue, it'd be best if there _never_ was a need to modify arena block size. 
> What if we forget even mentioning arena block size in the docs and focus on 
> the managed memory size, since managed memory size is something the user does 
> tune.
> h1. The proposed fix
> The proposed fix is to log the issue on WARN level and tell the user clearly 
> what is happening and how to fix.





[GitHub] [flink] carp84 commented on pull request #13977: [Mirror] [FLINK-19238] Sanity check for RocksDB arena block size

2020-11-08 Thread GitBox


carp84 commented on pull request #13977:
URL: https://github.com/apache/flink/pull/13977#issuecomment-723723365


   Merged via 93c6256







[GitHub] [flink] carp84 closed pull request #13977: [Mirror] [FLINK-19238] Sanity check for RocksDB arena block size

2020-11-08 Thread GitBox


carp84 closed pull request #13977:
URL: https://github.com/apache/flink/pull/13977


   







[GitHub] [flink] carp84 closed pull request #13688: [FLINK-19238] Arena block size sanity

2020-11-08 Thread GitBox


carp84 closed pull request #13688:
URL: https://github.com/apache/flink/pull/13688


   







[GitHub] [flink] SteNicholas commented on a change in pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


SteNicholas commented on a change in pull request #13986:
URL: https://github.com/apache/flink/pull/13986#discussion_r519525532



##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/operators/python/TimestampsAndWatermarksOperator.java
##
@@ -0,0 +1,251 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.python;
+
+import org.apache.flink.api.common.eventtime.Watermark;
+import org.apache.flink.api.common.eventtime.WatermarkGenerator;
+import org.apache.flink.api.common.eventtime.WatermarkOutput;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.memory.ManagedMemoryUseCase;
+import org.apache.flink.python.PythonFunctionRunner;
+import 
org.apache.flink.streaming.api.functions.python.DataStreamPythonFunctionInfo;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import org.apache.flink.streaming.api.operators.Output;
+import 
org.apache.flink.streaming.api.runners.python.beam.BeamDataStreamPythonFunctionRunner;
+import org.apache.flink.streaming.api.utils.PythonOperatorUtils;
+import org.apache.flink.streaming.api.utils.PythonTypeUtils;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.streamstatus.StreamStatus;
+import org.apache.flink.streaming.runtime.streamstatus.StreamStatusMaintainer;
+import org.apache.flink.streaming.runtime.tasks.ProcessingTimeCallback;
+import org.apache.flink.types.Row;
+
+import java.util.Collections;
+
+/**
+ * A stream operator that may do one or both of the following: extract timestamps from
+ * events and generate watermarks with a user-specified TimestampAssigner and WatermarkStrategy.
+ *
+ * These two responsibilities run in the same operator rather than in two 
different ones,
+ * because the implementation of the timestamp assigner and the watermark 
generator is
+ * frequently in the same class (and should be run in the same instance), even 
though the
+ * separate interfaces support the use of different classes.
+ *
+ * @param <IN> The type of the input elements
+ */
+public class TimestampsAndWatermarksOperator extends 
StatelessOneInputPythonFunctionOperator
+   implements ProcessingTimeCallback {
+
+   private static final long serialVersionUID = 1L;
+
+   /**
+* A user specified watermarkStrategy.
+*/
+   private final WatermarkStrategy watermarkStrategy;
+
+   /**
+* The TypeInformation of python worker input data.
+*/
+   private final TypeInformation runnerInputTypeInfo;
+
+   /**
+* The TypeInformation of python worker output data.
+*/
+   private final TypeInformation runnerOutputTypeInfo;
+
+   /**
+* Serializer to serialize input data for python worker.
+*/
+   private transient TypeSerializer runnerInputSerializer;
+
+   /**
+* Serializer to deserialize output data from python worker.
+*/
+   private transient TypeSerializer runnerOutputSerializer;
+
+   /** The watermark generator, initialized during runtime. */
+   private transient WatermarkGenerator watermarkGenerator;
+
+   /** The watermark output gateway, initialized during runtime. */
+   private transient WatermarkOutput watermarkOutput;
+
+   /** The interval (in milliseconds) for periodic watermark probes. 
Initialized during runtime. */
+   private transient long watermarkInterval;
+
+   /**
+* Reusable row for normal data runner inputs.
+*/
+   private transient Row reusableInput;
+
+   /**
+* Reusable StreamRecord for data with new timestamp calculated in 
TimestampAssigner.
+*/
+   private transient StreamRecord reusableStreamRecord;
+

Review comment:
    This misses the `emitProgressiveWatermarks` flag, which controls whether to 
emit intermediate watermarks or 
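The two responsibilities described in the operator's javadoc above (timestamp assignment plus periodic watermark generation) can be sketched as follows, purely illustratively and outside any Flink API (the class and method names are stand-ins):

```python
class BoundedOutOfOrdernessSketch:
    """Illustrative stand-in combining a timestamp assigner and a
    periodic watermark generator; not Flink API."""

    def __init__(self, max_out_of_orderness_ms=0):
        self.max_ooo = max_out_of_orderness_ms
        self.max_ts = float("-inf")

    def extract_timestamp(self, value):
        # TimestampAssigner role: pull the event time out of the record,
        # here assumed to be stored in the record's second field.
        return value[1]

    def on_event(self, event_ts):
        # WatermarkGenerator role: track the highest timestamp seen.
        self.max_ts = max(self.max_ts, event_ts)

    def on_periodic_emit(self):
        # Invoked on a processing-time timer; the emitted watermark
        # trails the maximum timestamp by the out-of-orderness bound.
        return self.max_ts - self.max_ooo


gen = BoundedOutOfOrdernessSketch(max_out_of_orderness_ms=500)
for record in [("a", 1000), ("b", 3000), ("c", 2000)]:
    gen.on_event(gen.extract_timestamp(record))
print(gen.on_periodic_emit())  # 2500
```

Running both roles in one object mirrors the rationale in the javadoc: the assigner and the generator frequently live in the same class and should share state.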

[GitHub] [flink] becketqin commented on pull request #13776: [FLINK-19717][connectors/common] Fix spurious InputStatus.END_OF_INPUT from SourceReaderBase.pollNext caused by split reader exception

2020-11-08 Thread GitBox


becketqin commented on pull request #13776:
URL: https://github.com/apache/flink/pull/13776#issuecomment-723723065


   Merged to master:
   2cce1aced0d6a311ff0803b773f1565e7f9d76fc







[GitHub] [flink] becketqin merged pull request #13776: [FLINK-19717][connectors/common] Fix spurious InputStatus.END_OF_INPUT from SourceReaderBase.pollNext caused by split reader exception

2020-11-08 Thread GitBox


becketqin merged pull request #13776:
URL: https://github.com/apache/flink/pull/13776


   







[jira] [Commented] (FLINK-20006) FileSinkITCase.testFileSink: The record 0 should occur 4 times, but only occurs 8time expected:<4> but was:<8>

2020-11-08 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228325#comment-17228325
 ] 

Yun Gao commented on FLINK-20006:
-

The latter issue should also get fixed now.

> FileSinkITCase.testFileSink: The record 0 should occur 4 times,  but only 
> occurs 8time expected:<4> but was:<8>
> ---
>
> Key: FLINK-20006
> URL: https://issues.apache.org/jira/browse/FLINK-20006
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Robert Metzger
>Assignee: Yun Gao
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9082=logs=fc5181b0-e452-5c8f-68de-1097947f6483=62110053-334f-5295-a0ab-80dd7e2babbf
> {code}
> 2020-11-05T13:31:16.7006473Z [ERROR] Tests run: 4, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 5.565 s <<< FAILURE! - in 
> org.apache.flink.connector.file.sink.FileSinkITCase
> 2020-11-05T13:31:16.7007237Z [ERROR] testFileSink[executionMode = STREAMING, 
> triggerFailover = true](org.apache.flink.connector.file.sink.FileSinkITCase)  
> Time elapsed: 0.548 s  <<< FAILURE!
> 2020-11-05T13:31:16.7007897Z java.lang.AssertionError: The record 0 should 
> occur 4 times,  but only occurs 8time expected:<4> but was:<8>
> 2020-11-05T13:31:16.7008317Z  at org.junit.Assert.fail(Assert.java:88)
> 2020-11-05T13:31:16.7008644Z  at 
> org.junit.Assert.failNotEquals(Assert.java:834)
> 2020-11-05T13:31:16.7008987Z  at 
> org.junit.Assert.assertEquals(Assert.java:645)
> 2020-11-05T13:31:16.7009392Z  at 
> org.apache.flink.connector.file.sink.FileSinkITCase.checkResult(FileSinkITCase.java:218)
> 2020-11-05T13:31:16.7009889Z  at 
> org.apache.flink.connector.file.sink.FileSinkITCase.testFileSink(FileSinkITCase.java:132)
> 2020-11-05T13:31:16.7010316Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> {code}





[GitHub] [flink] becketqin commented on a change in pull request #13776: [FLINK-19717][connectors/common] Fix spurious InputStatus.END_OF_INPUT from SourceReaderBase.pollNext caused by split reader ex

2020-11-08 Thread GitBox


becketqin commented on a change in pull request #13776:
URL: https://github.com/apache/flink/pull/13776#discussion_r519524957



##
File path: 
flink-connectors/flink-connector-base/src/main/java/org/apache/flink/connector/base/source/reader/fetcher/SplitFetcherManager.java
##
@@ -112,7 +111,7 @@ public void accept(Throwable t) {
public abstract void addSplits(List splitsToAdd);
 
protected void startFetcher(SplitFetcher fetcher) {
-   executors.submit(new ThrowableCatchingRunnable(errorHandler, 
fetcher));
+   executors.submit(fetcher);

Review comment:
   @kezhuw Thanks for the explanation. Good point.









[GitHub] [flink] dianfu commented on a change in pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


dianfu commented on a change in pull request #13986:
URL: https://github.com/apache/flink/pull/13986#discussion_r519516510



##
File path: flink-python/pyflink/datastream/tests/test_data_stream.py
##
@@ -601,6 +602,50 @@ def test_basic_array_type_info(self):
 expected.sort()
 self.assertEqual(expected, results)
 
+def test_timestamp_assigner_and_watermark_strategy(self):
+self.env.set_parallelism(1)
+self.env.set_stream_time_characteristic(TimeCharacteristic.EventTime)
+data_stream = self.env.from_collection([(1, '1603708211000'),
+(2, '1603708224000'),
+(3, '1603708226000'),
+(4, '1603708289000')],
+   
type_info=Types.ROW([Types.INT(), Types.STRING()]))
+
+class MyTimestampAssigner(TimestampAssigner):
+
+def extract_timestamp(self, value, previous) -> int:
+return int(value[1])
+
+class MyProcessFunction(ProcessFunction):
+
+def process_element(self, value, ctx, out):
+current_timestamp = ctx.timestamp()
+current_watermark = ctx.timer_service().current_watermark()
+out.collect("current timestamp: {}, current watermark: {}, 
current_value: {}"
+.format(str(current_timestamp), 
str(current_watermark), str(value)))
+
+def on_timer(self, timestamp, ctx, out):
+pass
+
+watermark_strategy = WatermarkStrategy.for_monotonous_timestamps()\
+.with_timestamp_assigner(MyTimestampAssigner())
+data_stream.assign_timestamps_and_watermarks(watermark_strategy)\
+.key_by(lambda x: x[0], key_type_info=Types.INT()) \
+.process(MyProcessFunction(), 
output_type=Types.STRING()).add_sink(self.test_sink)
+self.env.execute('test time stamp assigner')
+result = self.test_sink.get_results()
+expeected_result = ["current timestamp: None, current watermark: 
9223372036854775807, "

Review comment:
   ```suggestion
   expected_result = ["current timestamp: None, current watermark: 
9223372036854775807, "
   ```

##
File path: 
flink-python/src/main/java/org/apache/flink/streaming/api/operators/python/TimestampsAndWatermarksOperator.java
##
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.api.operators.python;
+
+import org.apache.flink.api.common.eventtime.Watermark;
+import org.apache.flink.api.common.eventtime.WatermarkGenerator;
+import org.apache.flink.api.common.eventtime.WatermarkOutput;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.common.typeutils.TypeSerializer;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.memory.ManagedMemoryUseCase;
+import org.apache.flink.python.PythonFunctionRunner;
+import 
org.apache.flink.streaming.api.functions.python.DataStreamPythonFunctionInfo;
+import org.apache.flink.streaming.api.operators.ChainingStrategy;
+import org.apache.flink.streaming.api.operators.Output;
+import 
org.apache.flink.streaming.api.runners.python.beam.BeamDataStreamPythonFunctionRunner;
+import org.apache.flink.streaming.api.utils.PythonOperatorUtils;
+import org.apache.flink.streaming.api.utils.PythonTypeUtils;
+import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
+import org.apache.flink.streaming.runtime.streamstatus.StreamStatus;
+import org.apache.flink.streaming.runtime.streamstatus.StreamStatusMaintainer;
+import org.apache.flink.streaming.runtime.tasks.ProcessingTimeCallback;
+import org.apache.flink.types.Row;
+
+import java.util.Collections;
+
+/**
+ * A stream operator that may do one or both of the following: extract timestamps from
+ * events and generate watermarks with a user-specified TimestampAssigner and WatermarkStrategy.
+ *
+ * 

[GitHub] [flink] gaoyunhaii closed pull request #13973: [FLINK-19850] Add e2e tests for the new File Sink in the streaming mode

2020-11-08 Thread GitBox


gaoyunhaii closed pull request #13973:
URL: https://github.com/apache/flink/pull/13973


   







[jira] [Assigned] (FLINK-18789) Use TableEnvironment#executeSql to execute insert statement in sql client

2020-11-08 Thread godfrey he (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18789?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

godfrey he reassigned FLINK-18789:
--

Assignee: godfrey he

> Use TableEnvironment#executeSql to execute insert statement in sql client
> -
>
> Key: FLINK-18789
> URL: https://issues.apache.org/jira/browse/FLINK-18789
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Currently, sql client has a lot of logic to execute an insert job, which can 
> be simplified through executeSql method.





[jira] [Created] (FLINK-20053) Add document for file compaction

2020-11-08 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-20053:


 Summary: Add document for file compaction
 Key: FLINK-20053
 URL: https://issues.apache.org/jira/browse/FLINK-20053
 Project: Flink
  Issue Type: Sub-task
  Components: Documentation, Table SQL / API
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: 1.12.0








[GitHub] [flink] flinkbot edited a comment on pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13986:
URL: https://github.com/apache/flink/pull/13986#issuecomment-723648682


   
   ## CI report:
   
   * 6afd298cd5356502822e36537d7a1c8ba67052b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9319)
 
   * 07eebefc93a1ea2a4cc90ece59125a3b82970be8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9326)
 
   * 80a45e8f1c6942ab89096b1bc277ae3990ffce4d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13606: [FLINK-19554][e2e] Implement a unified testing framework for connectors

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13606:
URL: https://github.com/apache/flink/pull/13606#issuecomment-707535989


   
   ## CI report:
   
   * 8063b2a34512c8d8e99cdba06a1872d9786832bb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9199)
 
   * 7f5361cb816e2fd6e3abad7adcf9caaabd126d2e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] carp84 commented on pull request #13977: [Mirror] [FLINK-19238] Sanity check for RocksDB arena block size

2020-11-08 Thread GitBox


carp84 commented on pull request #13977:
URL: https://github.com/apache/flink/pull/13977#issuecomment-723715835


    The latest Azure build failed due to FLINK-20052, and comparing the two 
builds we could confirm the failure is unrelated to the changes here, thus I 
will merge the commits soon.
   
   @rmetzger @dianfu JFYI (for 1.12.0 release).







[GitHub] [flink] flinkbot edited a comment on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 5da43fd50d04596217e89f547f97dbb4711ca7ee Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9325)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #13986: [FLINK-20016][python] Support TimestampAssigner and WatermarkGenerator for Python DataStream API.

2020-11-08 Thread GitBox


flinkbot edited a comment on pull request #13986:
URL: https://github.com/apache/flink/pull/13986#issuecomment-723648682


   
   ## CI report:
   
   * 6afd298cd5356502822e36537d7a1c8ba67052b5 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=9319)
 
   * 07eebefc93a1ea2a4cc90ece59125a3b82970be8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-20052) kafka/gelly pre-commit test failed due to process hang

2020-11-08 Thread Yu Li (Jira)
Yu Li created FLINK-20052:
-

 Summary: kafka/gelly pre-commit test failed due to process hang
 Key: FLINK-20052
 URL: https://issues.apache.org/jira/browse/FLINK-20052
 Project: Flink
  Issue Type: Bug
  Components: Library / Graph Processing (Gelly), Tests
Affects Versions: 1.12.0
Reporter: Yu Li
 Fix For: 1.12.0


The kafka/gelly test failed in recent pre-commit [Azure 
build|https://s.apache.org/flink-kafka-gelly-failure]:
{noformat}
Process produced no output for 900 seconds.
...
##[error]Bash exited with code '143'.
{noformat}




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-20044) Disposal of RocksDB could last forever

2020-11-08 Thread Jiayi Liao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228315#comment-17228315
 ] 

Jiayi Liao commented on FLINK-20044:


[~sewen] I guess this is not a cancellation situation, because I didn't see 
any cancellation logs on the TaskExecutor in the context. I think the task throws 
an exception, and in the try-finally block in {{StreamTask}} the thread 
hangs on {{disposeAllOperators}}. We also cannot observe the exception, 
because it is logged in {{Task}}, which executes after the 
try-finally block in {{StreamTask}}.
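
An illustrative sketch of this effect (hypothetical class and method names, not 
the actual Flink code): the exception thrown in the try block cannot propagate, 
and thus cannot be logged by the caller, until the finally block returns.

```java
public class DisposalHangSketch {

    // Stand-in for a native RocksDB disposal that blocks; on a broken disk
    // it may never return -- here we just sleep briefly.
    static void disposeAllOperators() throws InterruptedException {
        Thread.sleep(200);
    }

    // Mirrors the shape of StreamTask.invoke(): the task fails, but the
    // exception stays invisible while the finally block is running.
    static void invoke() throws Exception {
        try {
            throw new RuntimeException("task failure");
        } finally {
            disposeAllOperators();
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        String seen = null;
        try {
            invoke();
        } catch (RuntimeException e) {
            seen = e.getMessage();
        }
        long elapsed = System.currentTimeMillis() - start;
        if (!"task failure".equals(seen)) {
            throw new IllegalStateException("exception was lost entirely");
        }
        if (elapsed < 200) {
            throw new IllegalStateException("exception surfaced before disposal finished");
        }
        System.out.println("failure became observable only after " + elapsed + " ms of disposal");
    }
}
```

If disposal never returns, the catch (and any logging) is never reached, which 
matches the observation above.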

> Disposal of RocksDB could last forever
> --
>
> Key: FLINK-20044
> URL: https://issues.apache.org/jira/browse/FLINK-20044
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / State Backends
>Affects Versions: 1.9.0
>Reporter: Jiayi Liao
>Priority: Major
>
> The task cannot fail itself because it's stuck on the disposal of RocksDB, 
> which also affects the job. I saw this for several times in recent months, 
> most of the errors come from the broken disk. But I think we should also do 
> something to deal with it more elegantly from Flink's perspective.
> {code:java}
> "LookUp_Join -> Sink_Unnamed (898/1777)- execution # 4" #411 prio=5 os_prio=0 
> tid=0x7fc9b0286800 nid=0xff6fc runnable [0x7fc966cfc000]
>java.lang.Thread.State: RUNNABLE
> at org.rocksdb.RocksDB.disposeInternal(Native Method)
> at org.rocksdb.RocksObject.disposeInternal(RocksObject.java:37)
> at 
> org.rocksdb.AbstractImmutableNativeReference.close(AbstractImmutableNativeReference.java:57)
> at org.apache.flink.util.IOUtils.closeQuietly(IOUtils.java:263)
> at 
> org.apache.flink.contrib.streaming.state.RocksDBKeyedStateBackend.dispose(RocksDBKeyedStateBackend.java:349)
> at 
> org.apache.flink.streaming.api.operators.AbstractStreamOperator.dispose(AbstractStreamOperator.java:371)
> at 
> org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.dispose(AbstractUdfStreamOperator.java:124)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.disposeAllOperators(StreamTask.java:618)
> at 
> org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:517)
> at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:733)
> at org.apache.flink.runtime.taskmanager.Task.run(Task.java:539)
> at java.lang.Thread.run(Thread.java:748)
> {code}





[GitHub] [flink] klion26 commented on pull request #13664: [FLINK-19673] Translate "Standalone Cluster" page into Chinese

2020-11-08 Thread GitBox


klion26 commented on pull request #13664:
URL: https://github.com/apache/flink/pull/13664#issuecomment-723712163


   Will merge this if no objections occur in the next few days.







[GitHub] [flink-web] klion26 commented on pull request #388: Add congxian qiu to community page

2020-11-08 Thread GitBox


klion26 commented on pull request #388:
URL: https://github.com/apache/flink-web/pull/388#issuecomment-723712223


   @rmetzger thanks for the review, will merge this and regenerate the content.







[GitHub] [flink] klion26 commented on pull request #13750: [FLINK-19394][docs-zh] Translate the 'Monitoring Checkpointing' page of 'Debugging & Monitoring' into Chinese

2020-11-08 Thread GitBox


klion26 commented on pull request #13750:
URL: https://github.com/apache/flink/pull/13750#issuecomment-723711963


   @dianfu thanks for the review, will merge this one soon.







[jira] [Commented] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-08 Thread zhisheng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228314#comment-17228314
 ] 

zhisheng commented on FLINK-19973:
--

[~kkl0u] closed

> 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Assignee: Kostas Kloudas
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use flink-sql-client to deploy a job to YARN (per-job mode), I set 
> `execution.target: yarn-per-job` in flink-conf.yaml and the job deploys to YARN.
>  
> When I deploy a jar job to YARN with the command `./bin/flink run -m 
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g 
> examples/streaming/StateMachineExample.jar`, it deploys fine, but the 
> `-ynm`, `-ytm 3g` and `-yjm 3g` options don't take effect.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the config `execution.target: yarn-per-job`, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!
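
For reference, when the per-job executor is selected via `execution.target: 
yarn-per-job`, the `-y*` shortcuts of the YARN session CLI are not picked up; 
the equivalent settings can be passed as generic `-D` options instead. A 
sketch of the workaround, assuming Flink 1.12 configuration keys (verify the 
key names against your version):

```shell
# Sketch: pass the application name and memory sizes as -D options instead of
# -y* shortcuts when the yarn-per-job target is active (keys assume Flink 1.12).
./bin/flink run \
  -t yarn-per-job \
  -Dyarn.application.name=flink-1.12-test \
  -Djobmanager.memory.process.size=3g \
  -Dtaskmanager.memory.process.size=3g \
  examples/streaming/StateMachineExample.jar
```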





[jira] [Closed] (FLINK-19973) 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: yarn-per-job` config

2020-11-08 Thread zhisheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-19973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhisheng closed FLINK-19973.

Resolution: Not A Bug

> 【Flink-Deployment】YARN CLI Parameter doesn't work when set `execution.target: 
> yarn-per-job` config
> --
>
> Key: FLINK-19973
> URL: https://issues.apache.org/jira/browse/FLINK-19973
> Project: Flink
>  Issue Type: Bug
>  Components: Command Line Client, Deployment / YARN
>Affects Versions: 1.12.0
>Reporter: zhisheng
>Assignee: Kostas Kloudas
>Priority: Major
> Attachments: image-2020-11-04-20-58-49-738.png, 
> image-2020-11-04-21-00-06-180.png
>
>
> When I use flink-sql-client to deploy a job to YARN (per-job mode), I set 
> `execution.target: yarn-per-job` in flink-conf.yaml and the job deploys to YARN.
>  
> When I deploy a jar job to YARN with the command `./bin/flink run -m 
> yarn-cluster -ynm flink-1.12-test -ytm 3g -yjm 3g 
> examples/streaming/StateMachineExample.jar`, it deploys fine, but the 
> `-ynm`, `-ytm 3g` and `-yjm 3g` options don't take effect.
>  
> !image-2020-11-04-20-58-49-738.png|width=912,height=235!
>  
>  
> When I remove the config `execution.target: yarn-per-job`, it works well.
>  
> !image-2020-11-04-21-00-06-180.png|width=1047,height=150!





[GitHub] [flink] flinkbot commented on pull request #13989: [FLINK-19448][connector/common] Synchronize fetchers.isEmpty status to SourceReaderBase using elementsQueue.notifyAvailable()

2020-11-08 Thread GitBox


flinkbot commented on pull request #13989:
URL: https://github.com/apache/flink/pull/13989#issuecomment-723709607


   
   ## CI report:
   
   * 5da43fd50d04596217e89f547f97dbb4711ca7ee UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-20051) SourceReaderTestBase.testAddSplitToExistingFetcher failed with NullPointerException

2020-11-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20051:

Labels: test-stability  (was: )

> SourceReaderTestBase.testAddSplitToExistingFetcher failed with 
> NullPointerException
> ---
>
> Key: FLINK-20051
> URL: https://issues.apache.org/jira/browse/FLINK-20051
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518
> {code}
> 2020-11-08T21:49:29.6792941Z [ERROR] 
> testAddSplitToExistingFetcher(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
>   Time elapsed: 0.632 s  <<< ERROR!
> 2020-11-08T21:49:29.6793408Z java.lang.NullPointerException
> 2020-11-08T21:49:29.6793998Z  at 
> org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader$KafkaPartitionSplitRecords.nextSplit(KafkaPartitionSplitReader.java:363)
> 2020-11-08T21:49:29.6795970Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.moveToNextSplit(SourceReaderBase.java:187)
> 2020-11-08T21:49:29.6796596Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:159)
> 2020-11-08T21:49:29.6797317Z  at 
> org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
> 2020-11-08T21:49:29.6797942Z  at 
> org.apache.flink.connector.testutils.source.reader.SourceReaderTestBase.testAddSplitToExistingFetcher(SourceReaderTestBase.java:98)
> {code}





[jira] [Created] (FLINK-20051) SourceReaderTestBase.testAddSplitToExistingFetcher failed with NullPointerException

2020-11-08 Thread Dian Fu (Jira)
Dian Fu created FLINK-20051:
---

 Summary: SourceReaderTestBase.testAddSplitToExistingFetcher failed 
with NullPointerException
 Key: FLINK-20051
 URL: https://issues.apache.org/jira/browse/FLINK-20051
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.12.0
Reporter: Dian Fu
 Fix For: 1.12.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=4be4ed2b-549a-533d-aa33-09e28e360cc8=0db94045-2aa0-53fa-f444-0130d6933518

{code}
2020-11-08T21:49:29.6792941Z [ERROR] 
testAddSplitToExistingFetcher(org.apache.flink.connector.kafka.source.reader.KafkaSourceReaderTest)
  Time elapsed: 0.632 s  <<< ERROR!
2020-11-08T21:49:29.6793408Z java.lang.NullPointerException
2020-11-08T21:49:29.6793998Zat 
org.apache.flink.connector.kafka.source.reader.KafkaPartitionSplitReader$KafkaPartitionSplitRecords.nextSplit(KafkaPartitionSplitReader.java:363)
2020-11-08T21:49:29.6795970Zat 
org.apache.flink.connector.base.source.reader.SourceReaderBase.moveToNextSplit(SourceReaderBase.java:187)
2020-11-08T21:49:29.6796596Zat 
org.apache.flink.connector.base.source.reader.SourceReaderBase.getNextFetch(SourceReaderBase.java:159)
2020-11-08T21:49:29.6797317Zat 
org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:116)
2020-11-08T21:49:29.6797942Zat 
org.apache.flink.connector.testutils.source.reader.SourceReaderTestBase.testAddSplitToExistingFetcher(SourceReaderTestBase.java:98)
{code}





[jira] [Created] (FLINK-20050) SourceCoordinatorProviderTest.testCheckpointAndReset failed with NullPointerException

2020-11-08 Thread Dian Fu (Jira)
Dian Fu created FLINK-20050:
---

 Summary: SourceCoordinatorProviderTest.testCheckpointAndReset 
failed with NullPointerException
 Key: FLINK-20050
 URL: https://issues.apache.org/jira/browse/FLINK-20050
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Common
Affects Versions: 1.12.0
Reporter: Dian Fu
 Fix For: 1.12.0


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392

{code}
2020-11-08T22:24:39.5642544Z [ERROR] 
testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
  Time elapsed: 0.954 s  <<< ERROR!
2020-11-08T22:24:39.5643055Z java.lang.NullPointerException
2020-11-08T22:24:39.5643578Zat 
org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
{code}





[jira] [Updated] (FLINK-20050) SourceCoordinatorProviderTest.testCheckpointAndReset failed with NullPointerException

2020-11-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20050:

Labels: test-stability  (was: )

> SourceCoordinatorProviderTest.testCheckpointAndReset failed with 
> NullPointerException
> -
>
> Key: FLINK-20050
> URL: https://issues.apache.org/jira/browse/FLINK-20050
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Common
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=77a9d8e1-d610-59b3-fc2a-4766541e0e33=7c61167f-30b3-5893-cc38-a9e3d057e392
> {code}
> 2020-11-08T22:24:39.5642544Z [ERROR] 
> testCheckpointAndReset(org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest)
>   Time elapsed: 0.954 s  <<< ERROR!
> 2020-11-08T22:24:39.5643055Z java.lang.NullPointerException
> 2020-11-08T22:24:39.5643578Z  at 
> org.apache.flink.runtime.source.coordinator.SourceCoordinatorProviderTest.testCheckpointAndReset(SourceCoordinatorProviderTest.java:94)
> {code}





[jira] [Commented] (FLINK-19863) SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process failed due to timeout"

2020-11-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228309#comment-17228309
 ] 

Dian Fu commented on FLINK-19863:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6

> SQLClientHBaseITCase.testHBase failed with "java.io.IOException: Process 
> failed due to timeout"
> ---
>
> Key: FLINK-19863
> URL: https://issues.apache.org/jira/browse/FLINK-19863
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=8541=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a=3425d8ba-5f03-540a-c64b-51b8481bf7d6
> {code}
> 00:50:02,589 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,106 [main] INFO  
> org.apache.flink.tests.util.flink.FlinkDistribution  [] - Stopping 
> Flink cluster.
> 00:50:04,741 [main] INFO  
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource [] - Backed up 
> logs to 
> /home/vsts/work/1/s/flink-end-to-end-tests/artifacts/flink-b3924665-1ac9-4309-8255-20f0dc94e7b9.
> 00:50:04,788 [main] INFO  
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource [] - Stopping 
> HBase Cluster
> 00:50:16,243 [main] ERROR 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase   [] - 
> 
> Test testHBase[0: 
> hbase-version:1.4.3](org.apache.flink.tests.util.hbase.SQLClientHBaseITCase) 
> failed with:
> java.io.IOException: Process failed due to timeout.
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:130)
>   at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
>   at 
> org.apache.flink.tests.util.flink.FlinkDistribution.submitSQLJob(FlinkDistribution.java:221)
>   at 
> org.apache.flink.tests.util.flink.LocalStandaloneFlinkResource$StandaloneClusterController.submitSQLJob(LocalStandaloneFlinkResource.java:196)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.executeSqlStatements(SQLClientHBaseITCase.java:215)
>   at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:152)
> {code}





[jira] [Commented] (FLINK-19882) E2E: SQLClientHBaseITCase crash

2020-11-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-19882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228307#comment-17228307
 ] 

Dian Fu commented on FLINK-19882:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9322=logs=739e6eac-8312-5d31-d437-294c4d26fced=a68b8d89-50e9-5977-4500-f4fde4f57f9b

> E2E: SQLClientHBaseITCase crash
> ---
>
> Key: FLINK-19882
> URL: https://issues.apache.org/jira/browse/FLINK-19882
> Project: Flink
>  Issue Type: Test
>  Components: Connectors / HBase
>Reporter: Jingsong Lee
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> INSTANCE: 
> [https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/8563/logs/141]
> {code:java}
> 2020-10-29T09:43:24.0088180Z [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.22.1:test (end-to-end-tests) 
> on project flink-end-to-end-tests-hbase: There are test failures.
> 2020-10-29T09:43:24.0088792Z [ERROR] 
> 2020-10-29T09:43:24.0089518Z [ERROR] Please refer to 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire-reports
>  for the individual test results.
> 2020-10-29T09:43:24.0090427Z [ERROR] Please refer to dump files (if any 
> exist) [date].dump, [date]-jvmRun[N].dump and [date].dumpstream.
> 2020-10-29T09:43:24.0090914Z [ERROR] The forked VM terminated without 
> properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0093105Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0094488Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0094797Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0095033Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0095321Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0095828Z [ERROR] 
> org.apache.maven.surefire.booter.SurefireBooterForkException: The forked VM 
> terminated without properly saying goodbye. VM crash or System.exit called?
> 2020-10-29T09:43:24.0097838Z [ERROR] Command was /bin/sh -c cd 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target
>  && /usr/lib/jvm/adoptopenjdk-8-hotspot-amd64/jre/bin/java -Xms256m -Xmx2048m 
> -Dmvn.forkNumber=2 -XX:+UseG1GC -jar 
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire/surefirebooter6795869883612750001.jar
>  
> /home/vsts/work/1/s/flink-end-to-end-tests/flink-end-to-end-tests-hbase/target/surefire
>  2020-10-29T09-34-47_778-jvmRun2 surefire2269050977160717631tmp 
> surefire_67897497331523564186tmp
> 2020-10-29T09:43:24.0098966Z [ERROR] Error occurred in starting fork, check 
> output in log
> 2020-10-29T09:43:24.0099266Z [ERROR] Process Exit Code: 143
> 2020-10-29T09:43:24.0099502Z [ERROR] Crashed tests:
> 2020-10-29T09:43:24.0099789Z [ERROR] 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-10-29T09:43:24.0100331Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.fork(ForkStarter.java:669)
> 2020-10-29T09:43:24.0100883Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:282)
> 2020-10-29T09:43:24.0101774Z [ERROR] at 
> org.apache.maven.plugin.surefire.booterclient.ForkStarter.run(ForkStarter.java:245)
> 2020-10-29T09:43:24.0102360Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeProvider(AbstractSurefireMojo.java:1183)
> 2020-10-29T09:43:24.0103004Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:1011)
> 2020-10-29T09:43:24.0103737Z [ERROR] at 
> org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:857)
> 2020-10-29T09:43:24.0104301Z [ERROR] at 
> org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
> 2020-10-29T09:43:24.0104828Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
> 2020-10-29T09:43:24.0105334Z [ERROR] at 
> org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
> 2020-10-29T09:43:24.0105826Z [ERROR] at 
> 

[jira] [Commented] (FLINK-20011) PageRankITCase.testPrintWithRMatGraph hangs

2020-11-08 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-20011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17228306#comment-17228306
 ] 

Dian Fu commented on FLINK-20011:
-

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9307=logs=c5f0071e-1851-543e-9a45-9ac140befc32=1fb1a56f-e8b5-5a82-00a0-a2db7757b4f5

> PageRankITCase.testPrintWithRMatGraph hangs
> ---
>
> Key: FLINK-20011
> URL: https://issues.apache.org/jira/browse/FLINK-20011
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / 
> Coordination, Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9121=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19]
> {code}
> 2020-11-05T22:42:34.4186647Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa98c00b800 nid=0x32f8 waiting on condition [0x7fa995c12000] 
> 2020-11-05T22:42:34.4187168Z java.lang.Thread.State: WAITING (parking) 
> 2020-11-05T22:42:34.4187563Z at sun.misc.Unsafe.park(Native Method) 
> 2020-11-05T22:42:34.4188246Z - parking to wait for <0x8736d120> (a 
> java.util.concurrent.CompletableFuture$Signaller) 
> 2020-11-05T22:42:34.411Z at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> 2020-11-05T22:42:34.4189351Z at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>  2020-11-05T22:42:34.4189930Z at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) 
> 2020-11-05T22:42:34.4190509Z at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>  2020-11-05T22:42:34.4191059Z at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
> 2020-11-05T22:42:34.4191591Z at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:893)
>  2020-11-05T22:42:34.4192208Z at 
> org.apache.flink.graph.asm.dataset.DataSetAnalyticBase.execute(DataSetAnalyticBase.java:55)
>  2020-11-05T22:42:34.4192787Z at 
> org.apache.flink.graph.drivers.output.Print.write(Print.java:48) 
> 2020-11-05T22:42:34.4193373Z at 
> org.apache.flink.graph.Runner.execute(Runner.java:454) 
> 2020-11-05T22:42:34.4194156Z at 
> org.apache.flink.graph.Runner.main(Runner.java:507) 
> 2020-11-05T22:42:34.4194618Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.getSystemOutput(DriverBaseITCase.java:208)
>  2020-11-05T22:42:34.4195192Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.expectedCount(DriverBaseITCase.java:100)
>  2020-11-05T22:42:34.4195914Z at 
> org.apache.flink.graph.drivers.PageRankITCase.testPrintWithRMatGraph(PageRankITCase.java:60)
> {code}





[jira] [Updated] (FLINK-20011) PageRankITCase.testPrintWithRMatGraph hangs

2020-11-08 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-20011:

Priority: Critical  (was: Major)

> PageRankITCase.testPrintWithRMatGraph hangs
> ---
>
> Key: FLINK-20011
> URL: https://issues.apache.org/jira/browse/FLINK-20011
> Project: Flink
>  Issue Type: Bug
>  Components: Library / Graph Processing (Gelly), Runtime / 
> Coordination, Runtime / Network
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Critical
>  Labels: test-stability
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=9121=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=80a658d1-f7f6-5d93-2758-53ac19fd5b19]
> {code}
> 2020-11-05T22:42:34.4186647Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa98c00b800 nid=0x32f8 waiting on condition [0x7fa995c12000] 
> 2020-11-05T22:42:34.4187168Z java.lang.Thread.State: WAITING (parking) 
> 2020-11-05T22:42:34.4187563Z at sun.misc.Unsafe.park(Native Method) 
> 2020-11-05T22:42:34.4188246Z - parking to wait for <0x8736d120> (a 
> java.util.concurrent.CompletableFuture$Signaller) 
> 2020-11-05T22:42:34.411Z at 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> 2020-11-05T22:42:34.4189351Z at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>  2020-11-05T22:42:34.4189930Z at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323) 
> 2020-11-05T22:42:34.4190509Z at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>  2020-11-05T22:42:34.4191059Z at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908) 
> 2020-11-05T22:42:34.4191591Z at 
> org.apache.flink.api.java.ExecutionEnvironment.execute(ExecutionEnvironment.java:893)
>  2020-11-05T22:42:34.4192208Z at 
> org.apache.flink.graph.asm.dataset.DataSetAnalyticBase.execute(DataSetAnalyticBase.java:55)
>  2020-11-05T22:42:34.4192787Z at 
> org.apache.flink.graph.drivers.output.Print.write(Print.java:48) 
> 2020-11-05T22:42:34.4193373Z at 
> org.apache.flink.graph.Runner.execute(Runner.java:454) 
> 2020-11-05T22:42:34.4194156Z at 
> org.apache.flink.graph.Runner.main(Runner.java:507) 
> 2020-11-05T22:42:34.4194618Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.getSystemOutput(DriverBaseITCase.java:208)
>  2020-11-05T22:42:34.4195192Z at 
> org.apache.flink.graph.drivers.DriverBaseITCase.expectedCount(DriverBaseITCase.java:100)
>  2020-11-05T22:42:34.4195914Z at 
> org.apache.flink.graph.drivers.PageRankITCase.testPrintWithRMatGraph(PageRankITCase.java:60)
> {code}




