[jira] [Commented] (FLINK-16572) CheckPubSubEmulatorTest is flaky on Azure

2020-05-19 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111832#comment-17111832
 ] 

Robert Metzger commented on FLINK-16572:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1874&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> CheckPubSubEmulatorTest is flaky on Azure
> -----------------------------------------
>
> Key: FLINK-16572
> URL: https://issues.apache.org/jira/browse/FLINK-16572
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Aljoscha Krettek
>Assignee: Richard Deurwaarder
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Log: 
> https://dev.azure.com/aljoschakrettek/Flink/_build/results?buildId=56&view=logs&j=1f3ed471-1849-5d3c-a34c-19792af4ad16&t=ce095137-3e3b-5f73-4b79-c42d3d5f8283&l=7842



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12258: [FLINK-17820][task][checkpointing] Don't flush channel state to disk explicitly

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12258:
URL: https://github.com/apache/flink/pull/12258#issuecomment-631108764


   
   ## CI report:
   
   * cf629e225bc323888017be5d5a86c7c89a2b76bd Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1892)
 
   * 8d01ba80d36c07517d7493cef13d6ab634c01e18 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer

2020-05-19 Thread GitBox


flinkbot commented on pull request #12263:
URL: https://github.com/apache/flink/pull/12263#issuecomment-631274882


   
   ## CI report:
   
   * 5e0f9df0a404a5d88b8762238ec37b903b9f0e4b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wuchong commented on pull request #12250: [FLINK-17619] Backport to 1.11

2020-05-19 Thread GitBox


wuchong commented on pull request #12250:
URL: https://github.com/apache/flink/pull/12250#issuecomment-631274912


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17825) HA end-to-end gets killed due to timeout

2020-05-19 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111831#comment-17111831
 ] 

Robert Metzger commented on FLINK-17825:


Looks like the timeout mechanism I introduced in FLINK-16423 is somehow broken. 
I will take a look.

> HA end-to-end gets killed due to timeout
> ----------------------------------------
>
> Key: FLINK-17825
> URL: https://issues.apache.org/jira/browse/FLINK-17825
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1867&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
> {code}
> 2020-05-19T20:46:50.9034002Z Killed TM @ 104061
> 2020-05-19T20:47:05.8510180Z Killed TM @ 107775
> 2020-05-19T20:47:55.1181475Z Killed TM @ 108337
> 2020-05-19T20:48:16.7907005Z Test (pid: 89099) did not finish after 540 
> seconds.
> 2020-05-19T20:48:16.790Z Printing Flink logs and killing it:
> [...]
> 2020-05-19T20:48:19.1016912Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh:
>  line 125: 89099 Terminated  ( cmdpid=$BASHPID; ( sleep 
> $TEST_TIMEOUT_SECONDS; echo "Test (pid: $cmdpid) did not finish after 
> $TEST_TIMEOUT_SECONDS seconds."; echo "Printing Flink logs and killing it:"; 
> cat ${FLINK_DIR}/log/*; kill "$cmdpid" ) & watchdog_pid=$!; echo 
> $watchdog_pid > $TEST_DATA_DIR/job_watchdog.pid; run_ha_test 4 
> ${STATE_BACKEND_TYPE} ${STATE_BACKEND_FILE_ASYNC} 
> ${STATE_BACKEND_ROCKS_INCREMENTAL} ${ZOOKEEPER_VERSION} )
> 2020-05-19T20:48:19.1017985Z Stopping job timeout watchdog (with pid=89100)
> 2020-05-19T20:48:19.1018621Z 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh:
>  line 112: kill: (89100) - No such process
> 2020-05-19T20:48:19.1019000Z Killing JM watchdog @ 91127
> 2020-05-19T20:48:19.1019199Z Killing TM watchdog @ 91883
> 2020-05-19T20:48:19.1019424Z [FAIL] Test script contains errors.
> 2020-05-19T20:48:19.1019639Z Checking of logs skipped.
> 2020-05-19T20:48:19.1019785Z 
> 2020-05-19T20:48:19.1020329Z [FAIL] 'Running HA (rocks, non-incremental) 
> end-to-end test' failed after 9 minutes and 0 seconds! Test exited with exit 
> code 1
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17622) Remove useless switch for decimal in PostgresCatalog

2020-05-19 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17622?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-17622:
-------------------------------

Assignee: Flavio Pompermaier

> Remove useless switch for decimal in PostgresCatalog
> ----------------------------------------------------
>
> Key: FLINK-17622
> URL: https://issues.apache.org/jira/browse/FLINK-17622
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Flavio Pompermaier
>Assignee: Flavio Pompermaier
>Priority: Major
>  Labels: pull-request-available
>
> Remove the useless switch case for decimal fields. The Postgres JDBC connector 
> translates them to numeric.
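
For illustration, a minimal Java sketch (hypothetical names, not the actual 
PostgresCatalog code) of the kind of type-mapping switch the ticket refers to. 
Since Postgres reports DECIMAL columns under the type name numeric, a dedicated 
"decimal" branch can never be reached:

{code}
import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.types.DataType;

class PgTypeMappingSketch {
    // Postgres surfaces DECIMAL columns as "numeric", so a separate
    // "decimal" case in this switch would be dead code.
    static DataType fromJdbcType(String pgType, int precision, int scale) {
        switch (pgType) {
            case "numeric":
                return DataTypes.DECIMAL(precision, scale);
            case "int4":
                return DataTypes.INT();
            default:
                throw new UnsupportedOperationException(
                    "Unsupported Postgres type: " + pgType);
        }
    }
}
{code}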



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17825) HA end-to-end gets killed due to timeout

2020-05-19 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17825:
----------------------------------

 Summary: HA end-to-end gets killed due to timeout
 Key: FLINK-17825
 URL: https://issues.apache.org/jira/browse/FLINK-17825
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination, Tests
Reporter: Robert Metzger
Assignee: Robert Metzger


CI: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1867&view=logs&j=c88eea3b-64a0-564d-0031-9fdcd7b8abee&t=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5
{code}
2020-05-19T20:46:50.9034002Z Killed TM @ 104061
2020-05-19T20:47:05.8510180Z Killed TM @ 107775
2020-05-19T20:47:55.1181475Z Killed TM @ 108337
2020-05-19T20:48:16.7907005Z Test (pid: 89099) did not finish after 540 seconds.
2020-05-19T20:48:16.790Z Printing Flink logs and killing it:

[...]

2020-05-19T20:48:19.1016912Z 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh: 
line 125: 89099 Terminated  ( cmdpid=$BASHPID; ( sleep 
$TEST_TIMEOUT_SECONDS; echo "Test (pid: $cmdpid) did not finish after 
$TEST_TIMEOUT_SECONDS seconds."; echo "Printing Flink logs and killing it:"; 
cat ${FLINK_DIR}/log/*; kill "$cmdpid" ) & watchdog_pid=$!; echo $watchdog_pid 
> $TEST_DATA_DIR/job_watchdog.pid; run_ha_test 4 ${STATE_BACKEND_TYPE} 
${STATE_BACKEND_FILE_ASYNC} ${STATE_BACKEND_ROCKS_INCREMENTAL} 
${ZOOKEEPER_VERSION} )
2020-05-19T20:48:19.1017985Z Stopping job timeout watchdog (with pid=89100)
2020-05-19T20:48:19.1018621Z 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/test_ha_datastream.sh: 
line 112: kill: (89100) - No such process
2020-05-19T20:48:19.1019000Z Killing JM watchdog @ 91127
2020-05-19T20:48:19.1019199Z Killing TM watchdog @ 91883
2020-05-19T20:48:19.1019424Z [FAIL] Test script contains errors.
2020-05-19T20:48:19.1019639Z Checking of logs skipped.
2020-05-19T20:48:19.1019785Z 
2020-05-19T20:48:19.1020329Z [FAIL] 'Running HA (rocks, non-incremental) 
end-to-end test' failed after 9 minutes and 0 seconds! Test exited with exit 
code 1
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * bd9add8e480455265ca95b863601f6608918b334 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1907)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr closed pull request #12228: [FLINK-17541][table] Support inline structured types

2020-05-19 Thread GitBox


twalthr closed pull request #12228:
URL: https://github.com/apache/flink/pull/12228


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-19 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111829#comment-17111829
 ] 

Caizhi Weng commented on FLINK-17817:
-------------------------------------

I recall that FLINK-17774 actually solves this problem. To handle object 
reuse, the collect sink in FLINK-17774 serializes the values in the {{invoke}} 
method, so no serialization happens in the socket server thread. Let's wait for 
FLINK-17774 to be merged so that this problem is solved as well.
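
For reference, a minimal Java sketch of the approach described above (names are 
hypothetical, not the actual CollectSinkFunction code): serializing inside 
{{invoke}} keeps the non-thread-safe serializer and the reused record object 
out of the socket server thread, which then only moves bytes.

{code}
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;

import java.io.ByteArrayOutputStream;
import java.util.ArrayDeque;
import java.util.Queue;

class SerializeOnInvokeSinkSketch<IN> {
    private final TypeSerializer<IN> serializer;
    private final Queue<byte[]> buffer = new ArrayDeque<>();

    SerializeOnInvokeSinkSketch(TypeSerializer<IN> serializer) {
        this.serializer = serializer;
    }

    // Task thread: serialize eagerly, so later mutation of a reused
    // record object cannot corrupt what the server thread sends.
    public void invoke(IN value) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        serializer.serialize(value, new DataOutputViewStreamWrapper(bytes));
        synchronized (buffer) {
            buffer.add(bytes.toByteArray());
        }
    }

    // Socket server thread: only hands out pre-serialized bytes and never
    // touches the serializer.
    public byte[] poll() {
        synchronized (buffer) {
            return buffer.poll();
        }
    }
}
{code}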

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -----------------------------------------------------------------------------
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java

[GitHub] [flink] TsReaper closed pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


TsReaper closed pull request #12262:
URL: https://github.com/apache/flink/pull/12262


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-15792) Make Flink logs accessible via kubectl logs per default

2020-05-19 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-15792:
------------------------------
Fix Version/s: (was: 1.10.2)

> Make Flink logs accessible via kubectl logs per default
> -------------------------------------------------------
>
> Key: FLINK-15792
> URL: https://issues.apache.org/jira/browse/FLINK-15792
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Assignee: Yang Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> I think we should make Flink's logs accessible via {{kubectl logs}} by 
> default. Firstly, this is the idiomatic way to obtain the logs from a 
> container on Kubernetes. Secondly, if something does not work and the 
> container cannot start or stops abruptly, there is no way to log into the 
> container and look for the log.file. This makes debugging the setup quite 
> hard.
> I think the best way would be to create the Flink Docker image in such a way 
> that it logs to stdout. In order to allow access to the log file from the web 
> UI, it should also create a log file. One way to achieve this is to add a 
> ConsoleAppender to the respective logging configuration. Another way could be 
> to start the process in console mode and then tee the stdout output into the 
> log file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-15793) Move kubernetes-entry.sh out of FLINK_HOME/bin

2020-05-19 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-15793:
------------------------------
Fix Version/s: (was: 1.10.2)
   (was: 1.11.0)

> Move kubernetes-entry.sh out of FLINK_HOME/bin
> ----------------------------------------------
>
> Key: FLINK-15793
> URL: https://issues.apache.org/jira/browse/FLINK-15793
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Till Rohrmann
>Priority: Minor
>
> Currently, {{FLINK_HOME/bin}} contains the file {{kubernetes-entry.sh}}. This 
> file is used to customize Flink's default Docker image. I think 
> {{FLINK_HOME/bin}} should not contain files which cannot be used directly. 
> Either we move the file to another directory or we incorporate it into Flink's 
> default Docker image. If we opt for the latter option, then this task is 
> related to FLINK-12546.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] TsReaper commented on pull request #12199: [FLINK-17774] [table] supports all kinds of changes for select result

2020-05-19 Thread GitBox


TsReaper commented on pull request #12199:
URL: https://github.com/apache/flink/pull/12199#issuecomment-631271255


   The Azure build for the latest commit already passed: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1861&view=results



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] dawidwys commented on a change in pull request #12075: [FLINK-17004] Document the LIKE clause of CREATE TABLE statement.

2020-05-19 Thread GitBox


dawidwys commented on a change in pull request #12075:
URL: https://github.com/apache/flink/pull/12075#discussion_r427775248



##
File path: docs/dev/table/sql/create.md
##
@@ -208,6 +215,101 @@ The key and value of expression `key1=val1` should both 
be string literal. See d
 
 **Notes:** The table registered with `CREATE TABLE` statement can be used as 
both table source and table sink, we can not decide if it is used as a source 
or sink until it is referenced in the DMLs.
 
+**LIKE clause**
+
+The `LIKE` clause is a variant of SQL features (Feature T171, “LIKE clause in 
table definition” and Feature T173, “Extended LIKE clause in table 
definition”). The clause can be used to create a table based on a definition of 
an existing table. Additionally, users
+can extend the original table or exclude certain parts of it. In contrast to 
the SQL standard the clause must be defined at the top-level of a CREATE 
statement. That is because the clause applies to multiple parts of the 
definition and not only to the schema part.
+
+You can use the clause e.g. to reuse (and potentially overwrite) certain 
connector properties or add watermarks to tables defined externally, e.g. add a 
watermark to a table created in Apache Hive. 
+
+Consider the example statement below:
+{% highlight sql %}
+CREATE TABLE Orders (
+user BIGINT,
+product STRING,
+order_time TIMESTAMP(3)
+) WITH ( 
+'connector' = 'kafka',
+'startup-mode' = 'earliest-offset'
+);
+
+CREATE TABLE Orders_with_watermark (
+-- Add watermark definition
+WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND 
+) WITH (
+-- Overwrite the startup-mode
+'startup-mode' = 'latest-offset'
+)
+LIKE Orders;
+{% endhighlight %}
+
+The resulting table `Orders_with_watermark` will be equivalent to a table 
created with the following statement:
+{% highlight sql %}
+CREATE TABLE Orders_with_watermark (
+user BIGINT,
+product STRING,
+order_time TIMESTAMP(3),
+WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND 
+) WITH (
+'connector' = 'kafka',
+'startup-mode' = 'latest-offset'
+);
+{% endhighlight %}
+
+The merging logic of table features can be controlled with `like options`.
+
+You can control the merging behavior of:
+
+* CONSTRAINTS - constraints such as primary and unique keys
+* GENERATED - computed columns
+* OPTIONS - connector options that describe connector and format properties
+* PARTITIONS - partition of the tables
+* WATERMARKS - watermark declarations
+
+with three different merging strategies:
+
+* INCLUDING - Includes the feature of the source table, fails on duplicate 
entries, e.g. if an option with the same key exists in both tables.
+* EXCLUDING - Does not include the given feature of the source table.
+* OVERWRITING - Includes the feature of the source table, overwrites duplicate 
entries of the source table with properties of the new table, e.g. if an option 
with the same key exists in both tables, the one from the current statement 
will be used.
+
+Additionally, you can use the `INCLUDING/EXCLUDING ALL` option to specify what 
should be the strategy if there was no specific strategy defined, i.e. if you 
use `EXCLUDING ALL INCLUDING WATERMARKS` only the watermarks will be included 
from the source table.
+
+Example:
+{% highlight sql %}
+-- A source table stored in a filesystem
+CREATE TABLE Orders_in_file (
+user BIGINT,
+product STRING,
+order_time_string STRING,
+order_time AS to_timestamp(order_time_string)
+)
+PARTITIONED BY (user)
+WITH ( 
+'connector' = 'filesystem',
+'path' = '...'
+);
+
+-- A corresponding table we want to store in kafka
+CREATE TABLE Orders_in_kafka (
+-- Add watermark definition
+WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND 
+) WITH (
+'connector' = 'kafka',
+...
+)
+LIKE Orders_in_file (
+-- Exclude everything besides the computed columns which we need to 
generate the watermark for.
+-- We do not want to have the partitions or filesystem options as those do 
not apply to kafka. 
+EXCLUDING ALL
+INCLUDING GENERATED
+);
+{% endhighlight %}
+
+If you provide no like options, by default the planner will use `INCLUDING ALL 
OVERWRITING OPTIONS` options.

Review comment:
   :+1: That was my try on using more active voice. I agree it's 
unnecessary here.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr commented on a change in pull request #11986: [FLINK-17361] [jdbc] Added custom query on JDBC tables

2020-05-19 Thread GitBox


twalthr commented on a change in pull request #11986:
URL: https://github.com/apache/flink/pull/11986#discussion_r427773573



##
File path: 
flink-connectors/flink-connector-jdbc/src/test/java/org/apache/flink/connector/jdbc/table/JdbcTableSourceITCase.java
##
@@ -143,20 +149,19 @@ public void testProjectableJdbcSource() throws Exception {
")"
);
 
-   StreamITCase.clear();
-   tEnv.toAppendStream(tEnv.sqlQuery("SELECT timestamp6_col, 
decimal_col FROM " + INPUT_TABLE), Row.class)
-   .addSink(new StreamITCase.StringSink<>());
-   env.execute();
+   TableResult tableResult = tEnv.executeSql("SELECT 
timestamp6_col, decimal_col FROM " + INPUT_TABLE);
+
+   List results = manifestResults(tableResult);
 
-   List expected =
-   Arrays.asList(
-   "2020-01-01T15:35:00.123456,100.1234",
-   "2020-01-01T15:36:01.123456,101.1234");
-   StreamITCase.compareWithList(expected);
+   assertThat(
+   results,
+   containsInAnyOrder(
+   
"2020-01-01T15:35:00.123456,100.1234",

Review comment:
   @aljoscha for the future: please use instances instead of strings





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer

2020-05-19 Thread GitBox


flinkbot commented on pull request #12263:
URL: https://github.com/apache/flink/pull/12263#issuecomment-631267389


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5e0f9df0a404a5d88b8762238ec37b903b9f0e4b (Wed May 20 
06:34:22 UTC 2020)
   
   **Warnings:**
* **1 pom.xml file was touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twalthr opened a new pull request #12263: [FLINK-16998][core] Support backwards compatibility for upgraded RowSerializer

2020-05-19 Thread GitBox


twalthr opened a new pull request #12263:
URL: https://github.com/apache/flink/pull/12263


   ## What is the purpose of the change
   
   Allows schema migration of the old serialization format for `RowSerializer`. 
The PR also updates the row serializer tests to the new 
`TypeSerializerUpgradeTestBase`.
   
   Since `Row` is `PublicEvolving` we can drop the old serialization format 
soon.
   
   ## Brief change log
   
   See commit messages.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows: 
`RowSerializerUpgradeTest`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: yes
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? JavaDocs
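
   For context, a simplified sketch (not the actual `RowSerializerSnapshot` 
code) of how a serializer snapshot can signal backwards compatibility in this 
API: a snapshot of the old format reports `compatibleAfterMigration()`, so 
existing state is read with the old format and rewritten with the new one.

```java
import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.api.common.typeutils.TypeSerializerSchemaCompatibility;
import org.apache.flink.types.Row;

class RowCompatibilitySketch {
    // The real resolveSchemaCompatibility also inspects the new serializer;
    // here the decision is reduced to a single flag for illustration.
    static TypeSerializerSchemaCompatibility<Row> resolve(
            boolean snapshotUsesLegacyFormat, TypeSerializer<Row> newSerializer) {
        if (snapshotUsesLegacyFormat) {
            // Old bytes are still readable but should be migrated on restore.
            return TypeSerializerSchemaCompatibility.compatibleAfterMigration();
        }
        return TypeSerializerSchemaCompatibility.compatibleAsIs();
    }
}
```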
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12262:
URL: https://github.com/apache/flink/pull/12262#issuecomment-631257430


   
   ## CI report:
   
   * 62091aabab937a2a802259aface1629fdac676b1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1908)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


flinkbot commented on pull request #12262:
URL: https://github.com/apache/flink/pull/12262#issuecomment-631257430


   
   ## CI report:
   
   * 62091aabab937a2a802259aface1629fdac676b1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 0bf2aa2f54e22e76fed071e3c614139d4d187fc4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1860)
 
   * bd9add8e480455265ca95b863601f6608918b334 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1907)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi closed pull request #11953: [FLINK-16975][doc] Add docs for FileSystem connector

2020-05-19 Thread GitBox


JingsongLi closed pull request #11953:
URL: https://github.com/apache/flink/pull/11953


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Jiayi-Liao commented on a change in pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread GitBox


Jiayi-Liao commented on a change in pull request #12261:
URL: https://github.com/apache/flink/pull/12261#discussion_r427749670



##
File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannel.java
##
@@ -181,6 +181,14 @@ void retriggerSubpartitionRequest(int subpartitionIndex) 
throws IOException {
moreAvailable = !receivedBuffers.isEmpty();
}
 
+   if (next == null) {

Review comment:
   I guess it's theoretically impossible that we get a null buffer here 
with your changes in `releaseAllResources`, which seem to solve the two cases you 
mentioned in the description. So this check is just for other, unknown bad cases?
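
   To make the race concrete, a minimal sketch (hypothetical names, not the 
actual RemoteInputChannel code) of a consumer polling a buffer queue that a 
concurrent release may clear; the null check turns a "should be impossible" 
state into an explicit failure instead of a NullPointerException later on.

```java
import java.util.ArrayDeque;

class GuardedBufferQueueSketch<T> {
    private final ArrayDeque<T> receivedBuffers = new ArrayDeque<>();
    private volatile boolean released;

    // Consumer side: may find the queue empty if release ran concurrently.
    T getNext() {
        T next;
        synchronized (receivedBuffers) {
            next = receivedBuffers.poll();
        }
        if (next == null && released) {
            // Defensive guard: fail loudly instead of returning null upstream.
            throw new IllegalStateException("Queue was released concurrently.");
        }
        return next;
    }

    // Release side: clears the buffers under the same lock.
    void releaseAllResources() {
        synchronized (receivedBuffers) {
            released = true;
            receivedBuffers.clear();
        }
    }
}
```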





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gyfora commented on pull request #12252: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source

2020-05-19 Thread GitBox


gyfora commented on pull request #12252:
URL: https://github.com/apache/flink/pull/12252#issuecomment-631246336


   Looks good +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] gyfora commented on pull request #12254: [FLINK-17802][kafka] Set offset commit only if group id is configured for new Kafka Table source

2020-05-19 Thread GitBox


gyfora commented on pull request #12254:
URL: https://github.com/apache/flink/pull/12254#issuecomment-631245945


   Looks good +1



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] JingsongLi commented on a change in pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


JingsongLi commented on a change in pull request #12262:
URL: https://github.com/apache/flink/pull/12262#discussion_r427749438



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/collect/CollectSinkFunction.java
##
@@ -330,6 +331,8 @@ public void setOperatorEventGateway(OperatorEventGateway 
eventGateway) {
private DataOutputViewStreamWrapper outStream;
 
private ServerThread() throws Exception {
+   // serializers are not thread safe
+   this.serializer = 
CollectSinkFunction.this.serializer.duplicate();

Review comment:
   This is too obscure. It could be `private ServerThread(TypeSerializer 
serializer)` instead.
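
   A small sketch of the suggested signature (hypothetical class name, 
simplified): duplicating at the call site makes it explicit that the thread 
owns its own serializer instance.

```java
import org.apache.flink.api.common.typeutils.TypeSerializer;

class ServerThreadSketch<IN> extends Thread {
    private final TypeSerializer<IN> serializer;

    // The caller passes an explicit duplicate; TypeSerializers may be
    // stateful and are not safe to share across threads.
    ServerThreadSketch(TypeSerializer<IN> serializer) {
        this.serializer = serializer;
    }
}

// Call site, making the per-thread copy visible:
// new ServerThreadSketch<>(serializer.duplicate()).start();
```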





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * 7820729185644e576dc8d9c9204f2879a193cba0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1904)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17822:
-----------------------------------
Fix Version/s: 1.11.0

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> ------------------------------------------------------------------------------------------
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invo

[jira] [Updated] (FLINK-17814) Translate native kubernetes document to Chinese

2020-05-19 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-17814:

Component/s: Documentation
 chinese-translation

> Translate native kubernetes document to Chinese
> ---
>
> Key: FLINK-17814
> URL: https://issues.apache.org/jira/browse/FLINK-17814
> Project: Flink
>  Issue Type: Task
>  Components: chinese-translation, Documentation
>Reporter: Yang Wang
>Priority: Major
>
> [https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/native_kubernetes.html]
>  
> Translate the native Kubernetes document to Chinese.
> The English version was updated in commit 7723774a0402e10bc914b1fa6128e3c80678dafe.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


flinkbot commented on pull request #12262:
URL: https://github.com/apache/flink/pull/12262#issuecomment-631243945


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 62091aabab937a2a802259aface1629fdac676b1 (Wed May 20 
05:23:32 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17817).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17819) Yarn error unhelpful when forgetting HADOOP_CLASSPATH

2020-05-19 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17819:
-----------------------------------
Labels: usability  (was: )

> Yarn error unhelpful when forgetting HADOOP_CLASSPATH
> -----------------------------------------------------
>
> Key: FLINK-17819
> URL: https://issues.apache.org/jira/browse/FLINK-17819
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Arvid Heise
>Priority: Major
>  Labels: usability
>
> When running
> {code:bash}
> flink run -m yarn-cluster -yjm 1768 -ytm 50072 -ys 32 ...
> {code}
> without exporting HADOOP_CLASSPATH, we get the unhelpful message
> {noformat}
> Could not build the program from JAR file: JAR file does not exist: -yjm
> {noformat}
> I'd expect something like
> {noformat}
> yarn-cluster can only be used with exported HADOOP_CLASSPATH, see  for 
> more information{noformat}
>  
> I suggest loading a stub for YarnCluster deployment if the actual 
> implementation fails to load; the stub would print this error when used.
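
A sketch of what such a stub could look like (hypothetical class name, not an 
existing Flink class): probe for a known Hadoop class and fail with an 
actionable message instead of a misleading argument-parsing error.

{code}
class YarnDeploymentStubSketch {
    // YarnClient is only resolvable when HADOOP_CLASSPATH is exported.
    static void checkHadoopOnClasspath() {
        try {
            Class.forName("org.apache.hadoop.yarn.client.api.YarnClient");
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(
                "yarn-cluster mode requires Hadoop on the classpath. "
                    + "Export HADOOP_CLASSPATH, e.g. via "
                    + "`export HADOOP_CLASSPATH=$(hadoop classpath)`, and retry.", e);
        }
    }
}
{code}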



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-19 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111788#comment-17111788
 ] 

Robert Metzger commented on FLINK-17730:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1888&view=logs&j=ba53eb01-1462-56a3-8e98-0dd97fbcaab5&t=eb5f4d19-2d2d-5856-a4ce-acf5f904a994

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> ------------------------------------------------------------------------------------------
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:5

[jira] [Reopened] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-19 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reopened FLINK-17730:


> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> ------------------------------------------------------------------------------------------
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:317)
> 2020-05-15T06:56:38.1705551Z  at 
> org.apache.hadoop.fs.s3a.Invoker.retry(Invoker.java:256)
> 2020-05-15T06:56:38.1705937Z  at 

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger updated FLINK-17822:
-----------------------------------
Priority: Blocker  (was: Major)

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> ------------------------------------------------------------------------------------------
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Blocker
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?
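
For reference, the failure above is expected JPMS behavior on Java 11 rather than a Flink-specific bug: java.base does not export jdk.internal.misc to the unnamed module, so the reflective call is rejected at invocation time. A minimal sketch that reproduces the same IllegalAccessException (assuming JDK 9-11, where SharedSecrets still lives in jdk.internal.misc; it moved to jdk.internal.access in later JDKs):

{code}
import java.lang.reflect.Method;

public class SharedSecretsAccessDemo {
    public static void main(String[] args) throws Exception {
        // Loading the class and looking up the method both succeed;
        // JPMS only rejects the actual invocation.
        Class<?> sharedSecrets = Class.forName("jdk.internal.misc.SharedSecrets");
        Method getJavaLangRefAccess = sharedSecrets.getMethod("getJavaLangRefAccess");
        try {
            getJavaLangRefAccess.invoke(null);
        } catch (IllegalAccessException e) {
            // "... because module java.base does not export
            //  jdk.internal.misc to unnamed module ..."
            System.out.println("Blocked by JPMS: " + e.getMessage());
        }
    }
}
{code}

Running the JVM with --add-exports java.base/jdk.internal.misc=ALL-UNNAMED makes the call succeed, which is why such flags are commonly needed for this kind of internal access on Java 11.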

[jira] [Updated] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17817:
---
Labels: pull-request-available test-stability  (was: test-stability)

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T10:34:18.3249417Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T10:34:18.3250357Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-19T10:34:18.3251021Z  at 
> org.junit.rules.Exter

[GitHub] [flink] TsReaper opened a new pull request #12262: [FLINK-17817][hotfix] Fix serializer thread safe problem in CollectSinkFunction

2020-05-19 Thread GitBox


TsReaper opened a new pull request #12262:
URL: https://github.com/apache/flink/pull/12262


   ## What is the purpose of the change
   
   This is a hotfix for `CollectSinkFunction`. `TypeSerializer`s are not 
thread safe, but `CollectSinkFunction` currently reuses them across two 
threads. This PR fixes that problem.
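   
   A minimal sketch of the idea (the helper below is illustrative only, not 
the actual `CollectSinkFunction` code): each thread serializes through its own 
duplicated serializer, since `TypeSerializer#duplicate()` is the supported way 
to obtain a thread-confined instance.
   
```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.apache.flink.api.common.typeutils.TypeSerializer;
import org.apache.flink.core.memory.DataOutputViewStreamWrapper;

// Hypothetical helper for illustration only.
final class ThreadConfinedSerializer<T> {

    private final ThreadLocal<TypeSerializer<T>> serializers;

    ThreadConfinedSerializer(TypeSerializer<T> prototype) {
        // duplicate() deep-copies stateful serializers; stateless,
        // thread-safe serializers may simply return themselves.
        this.serializers = ThreadLocal.withInitial(prototype::duplicate);
    }

    byte[] serialize(T value) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        serializers.get().serialize(value, new DataOutputViewStreamWrapper(baos));
        return baos.toByteArray();
    }
}
```
   
   The actual fix simply duplicates the serializer for the second thread; the 
helper above just makes the pattern explicit.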
   
   ## Brief change log
   
   - Fix serializer thread safe problem in CollectSinkFunction
   
   ## Verifying this change
   
   This change is already covered by existing tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17745) PackagedProgram' extractedTempLibraries and jarfiles may be duplicate

2020-05-19 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111784#comment-17111784
 ] 

Yang Wang commented on FLINK-17745:
---

[~Echo Lee] So this is not a problem and we could directly close this ticket, 
right?

As a follow-up, maybe we should supplement the documentation with a 
description of the fat jar structure.

> PackagedProgram' extractedTempLibraries and jarfiles may be duplicate
> -
>
> Key: FLINK-17745
> URL: https://issues.apache.org/jira/browse/FLINK-17745
> Project: Flink
>  Issue Type: Improvement
>  Components: Client / Job Submission
>Reporter: Echo Lee
>Assignee: Kostas Kloudas
>Priority: Major
>  Labels: pull-request-available
>
> When I submit a Flink app with a fat jar, PackagedProgram extracts temp 
> libraries from the fat jar and adds them to pipeline.jars, so pipeline.jars 
> contains both the fat jar and the temp libraries. I don't think we should 
> add the fat jar to pipeline.jars if extractedTempLibraries is not empty.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-17824) "Resuming Savepoint" e2e stalls indefinitely

2020-05-19 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-17824:
--

 Summary: "Resuming Savepoint" e2e stalls indefinitely 
 Key: FLINK-17824
 URL: https://issues.apache.org/jira/browse/FLINK-17824
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing, Tests
Reporter: Robert Metzger


CI; 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8

{code}
2020-05-19T21:05:52.9696236Z 
==
2020-05-19T21:05:52.9696860Z Running 'Resuming Savepoint (file, async, scale 
down) end-to-end test'
2020-05-19T21:05:52.9697243Z 
==
2020-05-19T21:05:52.9713094Z TEST_DATA_DIR: 
/home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-52970362751
2020-05-19T21:05:53.1194478Z Flink dist directory: 
/home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
2020-05-19T21:05:53.2180375Z Starting cluster.
2020-05-19T21:05:53.9986167Z Starting standalonesession daemon on host fv-az558.
2020-05-19T21:05:55.5997224Z Starting taskexecutor daemon on host fv-az558.
2020-05-19T21:05:55.6223837Z Waiting for Dispatcher REST endpoint to come up...
2020-05-19T21:05:57.0552482Z Waiting for Dispatcher REST endpoint to come up...
2020-05-19T21:05:57.9446865Z Waiting for Dispatcher REST endpoint to come up...
2020-05-19T21:05:59.0098434Z Waiting for Dispatcher REST endpoint to come up...
2020-05-19T21:06:00.0569710Z Dispatcher REST endpoint is up.
2020-05-19T21:06:07.7099937Z Job (a92a74de8446a80403798bb4806b73f3) is running.
2020-05-19T21:06:07.7855906Z Waiting for job to process up to 200 records, 
current progress: 114 records ...
2020-05-19T21:06:55.5755111Z 
2020-05-19T21:06:55.5756550Z 

2020-05-19T21:06:55.5757225Z  The program finished with the following exception:
2020-05-19T21:06:55.5757566Z 
2020-05-19T21:06:55.5765453Z org.apache.flink.util.FlinkException: Could not 
stop with a savepoint job "a92a74de8446a80403798bb4806b73f3".
2020-05-19T21:06:55.5766873Zat 
org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:485)
2020-05-19T21:06:55.5767980Zat 
org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:854)
2020-05-19T21:06:55.5769014Zat 
org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:477)
2020-05-19T21:06:55.5770052Zat 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:921)
2020-05-19T21:06:55.5771107Zat 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982)
2020-05-19T21:06:55.5772223Zat 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
2020-05-19T21:06:55.5773325Zat 
org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982)
2020-05-19T21:06:55.5774871Z Caused by: 
java.util.concurrent.ExecutionException: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint Coordinator 
is suspending.
2020-05-19T21:06:55.5777183Zat 
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
2020-05-19T21:06:55.5778884Zat 
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
2020-05-19T21:06:55.5779920Zat 
org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:483)
2020-05-19T21:06:55.5781175Z... 6 more
2020-05-19T21:06:55.5782391Z Caused by: 
java.util.concurrent.CompletionException: 
java.util.concurrent.CompletionException: 
org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint Coordinator 
is suspending.
2020-05-19T21:06:55.5783885Zat 
org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:890)
2020-05-19T21:06:55.5784992Zat 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
2020-05-19T21:06:55.5786492Zat 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
2020-05-19T21:06:55.5787601Zat 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
2020-05-19T21:06:55.5788682Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:402)
2020-05-19T21:06:55.5790308Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:195)
2020-05-19T21:06:55.5791664Zat 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
2020-05-19T21:06:55.5792767Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
2020-05-19T21:06:55.5793756Zat 
akka.jap

[jira] [Assigned] (FLINK-17824) "Resuming Savepoint" e2e stalls indefinitely

2020-05-19 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-17824:
--

Assignee: Robert Metzger

> "Resuming Savepoint" e2e stalls indefinitely 
> -
>
> Key: FLINK-17824
> URL: https://issues.apache.org/jira/browse/FLINK-17824
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> CI; 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8
> {code}
> 2020-05-19T21:05:52.9696236Z 
> ==
> 2020-05-19T21:05:52.9696860Z Running 'Resuming Savepoint (file, async, scale 
> down) end-to-end test'
> 2020-05-19T21:05:52.9697243Z 
> ==
> 2020-05-19T21:05:52.9713094Z TEST_DATA_DIR: 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-52970362751
> 2020-05-19T21:05:53.1194478Z Flink dist directory: 
> /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
> 2020-05-19T21:05:53.2180375Z Starting cluster.
> 2020-05-19T21:05:53.9986167Z Starting standalonesession daemon on host 
> fv-az558.
> 2020-05-19T21:05:55.5997224Z Starting taskexecutor daemon on host fv-az558.
> 2020-05-19T21:05:55.6223837Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:57.0552482Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:57.9446865Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:05:59.0098434Z Waiting for Dispatcher REST endpoint to come 
> up...
> 2020-05-19T21:06:00.0569710Z Dispatcher REST endpoint is up.
> 2020-05-19T21:06:07.7099937Z Job (a92a74de8446a80403798bb4806b73f3) is 
> running.
> 2020-05-19T21:06:07.7855906Z Waiting for job to process up to 200 records, 
> current progress: 114 records ...
> 2020-05-19T21:06:55.5755111Z 
> 2020-05-19T21:06:55.5756550Z 
> 
> 2020-05-19T21:06:55.5757225Z  The program finished with the following 
> exception:
> 2020-05-19T21:06:55.5757566Z 
> 2020-05-19T21:06:55.5765453Z org.apache.flink.util.FlinkException: Could not 
> stop with a savepoint job "a92a74de8446a80403798bb4806b73f3".
> 2020-05-19T21:06:55.5766873Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:485)
> 2020-05-19T21:06:55.5767980Z  at 
> org.apache.flink.client.cli.CliFrontend.runClusterAction(CliFrontend.java:854)
> 2020-05-19T21:06:55.5769014Z  at 
> org.apache.flink.client.cli.CliFrontend.stop(CliFrontend.java:477)
> 2020-05-19T21:06:55.5770052Z  at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:921)
> 2020-05-19T21:06:55.5771107Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982)
> 2020-05-19T21:06:55.5772223Z  at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
> 2020-05-19T21:06:55.5773325Z  at 
> org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982)
> 2020-05-19T21:06:55.5774871Z Caused by: 
> java.util.concurrent.ExecutionException: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
> 2020-05-19T21:06:55.5777183Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-19T21:06:55.5778884Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1928)
> 2020-05-19T21:06:55.5779920Z  at 
> org.apache.flink.client.cli.CliFrontend.lambda$stop$5(CliFrontend.java:483)
> 2020-05-19T21:06:55.5781175Z  ... 6 more
> 2020-05-19T21:06:55.5782391Z Caused by: 
> java.util.concurrent.CompletionException: 
> java.util.concurrent.CompletionException: 
> org.apache.flink.runtime.checkpoint.CheckpointException: Checkpoint 
> Coordinator is suspending.
> 2020-05-19T21:06:55.5783885Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.lambda$stopWithSavepoint$9(SchedulerBase.java:890)
> 2020-05-19T21:06:55.5784992Z  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:836)
> 2020-05-19T21:06:55.5786492Z  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
> 2020-05-19T21:06:55.5787601Z  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
> 2020-05-19T21:06:55.5788682Z  at 
> org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRp

[jira] [Closed] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP

2020-05-19 Thread Zhu Zhu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhu Zhu closed FLINK-17821.
---
Resolution: Duplicate

> Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
> 
>
> Key: FLINK-17821
> URL: https://issues.apache.org/jira/browse/FLINK-17821
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Zhu Zhu
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8&l=12032
> 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId 
> = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) 
> failed with:
> 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-19T16:29:40.7241033Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-19T16:29:40.7241542Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-19T16:29:40.7242127Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31)
> 2020-05-19T16:29:40.7242729Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-19T16:29:40.7243239Z  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145)
> 2020-05-19T16:29:40.7243691Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T16:29:40.7244273Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T16:29:40.7244729Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T16:29:40.7245117Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T16:29:40.7245515Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T16:29:40.7245956Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T16:29:40.7246419Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T16:29:40.7246870Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T16:29:40.7247287Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T16:29:40.7251320Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T16:29:40.7251833Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T16:29:40.7252251Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T16:29:40.7252716Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T16:29:40.7253117Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7253502Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7254041Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7254528Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7255500Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7256064Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-19T16:29:40.7256438Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-19T16:29:40.7256758Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-19T16:29:40.7257118Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7257486Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7257885Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7258389Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7258821Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7259219Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T16:29:40.7259664Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T16:29:40.7260098Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
> 2020-05-19T16:29:40.7260635Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-

[jira] [Commented] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP

2020-05-19 Thread Zhu Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111779#comment-17111779
 ] 

Zhu Zhu commented on FLINK-17821:
-

[~wanglijie95] yes, it's the same root cause. 
Thanks for the information!

> Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
> 
>
> Key: FLINK-17821
> URL: https://issues.apache.org/jira/browse/FLINK-17821
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Zhu Zhu
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8&l=12032
> 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId 
> = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) 
> failed with:
> 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-19T16:29:40.7241033Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-19T16:29:40.7241542Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-19T16:29:40.7242127Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31)
> 2020-05-19T16:29:40.7242729Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-19T16:29:40.7243239Z  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145)
> 2020-05-19T16:29:40.7243691Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T16:29:40.7244273Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T16:29:40.7244729Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T16:29:40.7245117Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T16:29:40.7245515Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T16:29:40.7245956Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T16:29:40.7246419Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T16:29:40.7246870Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T16:29:40.7247287Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T16:29:40.7251320Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T16:29:40.7251833Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T16:29:40.7252251Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T16:29:40.7252716Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T16:29:40.7253117Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7253502Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7254041Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7254528Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7255500Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7256064Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-19T16:29:40.7256438Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-19T16:29:40.7256758Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-19T16:29:40.7257118Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7257486Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7257885Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7258389Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7258821Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7259219Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T16:29:40.7259664Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T16:29:40.7260098Z  at 
> org.junit.rules.ExternalResource$1.evaluate(ExternalResour

[GitHub] [flink] flinkbot edited a comment on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12261:
URL: https://github.com/apache/flink/pull/12261#issuecomment-631229356


   
   ## CI report:
   
   * 26afeb03aa30f84994a8aa85ca2d223d44672067 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1905)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12181: [FLINK-17645][runtime] Fix SafetyNetCloseableRegistry constructor bug.

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12181:
URL: https://github.com/apache/flink/pull/12181#issuecomment-629344595


   
   ## CI report:
   
   * 0bf2aa2f54e22e76fed071e3c614139d4d187fc4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1860)
 
   * bd9add8e480455265ca95b863601f6608918b334 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * 7820729185644e576dc8d9c9204f2879a193cba0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1904)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] twentyworld commented on pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-19 Thread GitBox


twentyworld commented on pull request #12237:
URL: https://github.com/apache/flink/pull/12237#issuecomment-631234411


   Thank you. Many of the guidelines here were things I did not know before 
working on this translation.
   I will go through everyone's comments again, rework the translation 
following the guide, and verify it by building the docs.
   If I have any questions, I will raise them and ask for answers.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread GitBox


flinkbot commented on pull request #12261:
URL: https://github.com/apache/flink/pull/12261#issuecomment-631229356


   
   ## CI report:
   
   * 26afeb03aa30f84994a8aa85ca2d223d44672067 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-19 Thread GitBox


flinkbot commented on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631229314


   
   ## CI report:
   
   * 7820729185644e576dc8d9c9204f2879a193cba0 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048


   
   ## CI report:
   
   * fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #11175:
URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100


   
   ## CI report:
   
   * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1900)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17817) CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase

2020-05-19 Thread Caizhi Weng (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111754#comment-17111754
 ] 

Caizhi Weng commented on FLINK-17817:
-

Thanks for the report. This is because type serializers are not thread safe, 
but I didn't duplicate them in the sink function. I'll fix this immediately.

> CollectResultFetcher fails with EOFException in AggregateReduceGroupingITCase
> -
>
> Key: FLINK-17817
> URL: https://issues.apache.org/jira/browse/FLINK-17817
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Blocker
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1826&view=logs&j=e25d5e7e-2a9c-5589-4940-0b638d75a414&t=f83cd372-208c-5ec4-12a8-337462457129
> {code}
> 2020-05-19T10:34:18.3224679Z [ERROR] 
> testSingleAggOnTable_SortAgg(org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase)
>   Time elapsed: 7.537 s  <<< ERROR!
> 2020-05-19T10:34:18.3225273Z java.lang.RuntimeException: Failed to fetch next 
> result
> 2020-05-19T10:34:18.3227634Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.nextResultFromFetcher(CollectResultIterator.java:92)
> 2020-05-19T10:34:18.3228518Z  at 
> org.apache.flink.streaming.api.operators.collect.CollectResultIterator.hasNext(CollectResultIterator.java:63)
> 2020-05-19T10:34:18.3229170Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Iterators.addAll(Iterators.java:361)
> 2020-05-19T10:34:18.3229863Z  at 
> org.apache.flink.shaded.guava18.com.google.common.collect.Lists.newArrayList(Lists.java:160)
> 2020-05-19T10:34:18.3230586Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.executeQuery(BatchTestBase.scala:300)
> 2020-05-19T10:34:18.3231303Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.check(BatchTestBase.scala:141)
> 2020-05-19T10:34:18.3231996Z  at 
> org.apache.flink.table.planner.runtime.utils.BatchTestBase.checkResult(BatchTestBase.scala:107)
> 2020-05-19T10:34:18.3232847Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable(AggregateReduceGroupingITCase.scala:176)
> 2020-05-19T10:34:18.3233694Z  at 
> org.apache.flink.table.planner.runtime.batch.sql.agg.AggregateReduceGroupingITCase.testSingleAggOnTable_SortAgg(AggregateReduceGroupingITCase.scala:122)
> 2020-05-19T10:34:18.3234461Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T10:34:18.3234983Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T10:34:18.3235632Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T10:34:18.3236615Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T10:34:18.3237256Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T10:34:18.3237965Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T10:34:18.3238750Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T10:34:18.3239314Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T10:34:18.3239838Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T10:34:18.3240362Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T10:34:18.3240803Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T10:34:18.3243624Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T10:34:18.3244531Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T10:34:18.3245325Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T10:34:18.3246086Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T10:34:18.3246765Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T10:34:18.3247390Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T10:34:18.3248012Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T10:34:18.3248779Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T10:34:18.3249417Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T10:34:18.3250357Z  at 
> org.junit.rules.Extern

[GitHub] [flink] flinkbot commented on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread GitBox


flinkbot commented on pull request #12261:
URL: https://github.com/apache/flink/pull/12261#issuecomment-631228177


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 26afeb03aa30f84994a8aa85ca2d223d44672067 (Wed May 20 
04:25:40 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17823) Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17823:
---
Labels: pull-request-available  (was: )

> Resolve the race condition while releasing RemoteInputChannel
> -
>
> Key: FLINK-17823
> URL: https://issues.apache.org/jira/browse/FLINK-17823
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> RemoteInputChannel#releaseAllResources might be called by the canceler 
> thread. Meanwhile, the task thread can also call 
> RemoteInputChannel#getNextBuffer. This can cause two potential problems:
>  * The task thread might get a null buffer after the canceler thread has 
> already released all the buffers, which can lead to a misleading NPE in 
> getNextBuffer.
>  * The task thread and the canceler thread might pull the same buffer 
> concurrently, which causes an unexpected exception when the same buffer is 
> recycled twice.
> The solution is to properly synchronize the buffer queue in the release 
> method so that the same buffer cannot be pulled by both the canceler thread 
> and the task thread. In the getNextBuffer method, we add explicit checks to 
> avoid the misleading NPE and to surface valid exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW opened a new pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread GitBox


zhijiangW opened a new pull request #12261:
URL: https://github.com/apache/flink/pull/12261


   
   ## What is the purpose of the change
   
   RemoteInputChannel#releaseAllResources might be called by the canceler 
thread. Meanwhile, the task thread can also call 
RemoteInputChannel#getNextBuffer.
   This can cause two potential problems:
   
   1. The task thread might get a null buffer after the canceler thread has 
already released all the buffers, which can lead to a misleading NPE in 
getNextBuffer.
   2. The task thread and the canceler thread might pull the same buffer 
concurrently, which causes an unexpected exception when the same buffer is 
recycled twice.
   
   The solution is to properly synchronize the buffer queue in the release 
method so that the same buffer cannot be pulled by both the canceler thread 
and the task thread.
   In the getNextBuffer method, we add explicit checks to avoid the misleading 
NPE and to surface valid exceptions.
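   
   A simplified sketch of the locking scheme (a stripped-down model with 
assumed names, not the actual `RemoteInputChannel` code):
   
```java
import java.util.ArrayDeque;

// Illustrative model only: the real RemoteInputChannel has more state and
// uses Flink's network Buffer type.
final class ReceivedBufferQueueModel {

    interface Buffer {
        void recycleBuffer();
    }

    private final ArrayDeque<Buffer> receivedBuffers = new ArrayDeque<>();
    private volatile boolean isReleased;

    // Task thread: fails fast with a clear message instead of a misleading NPE.
    Buffer getNextBuffer() {
        final Buffer buffer;
        synchronized (receivedBuffers) {
            if (isReleased) {
                throw new IllegalStateException("Input channel already released.");
            }
            buffer = receivedBuffers.poll();
        }
        if (buffer == null) {
            throw new IllegalStateException("Expected a buffer, but none was queued.");
        }
        return buffer;
    }

    // Canceler thread: draining under the same lock guarantees that no buffer
    // can be pulled by both threads, so each buffer is recycled exactly once.
    void releaseAllResources() {
        synchronized (receivedBuffers) {
            if (isReleased) {
                return;
            }
            isReleased = true;
            Buffer buffer;
            while ((buffer = receivedBuffers.poll()) != null) {
                buffer.recycleBuffer();
            }
        }
    }
}
```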
   
   ## Brief change log
   
 - Fix the synchronized `receivedBuffers` in 
`RemoteInputChannel#releaseAllResources`
 - check the released state and give proper exceptions in 
`RemoteInputChannel#getNextBuffer`
   
   ## Verifying this change
   
   New unit test in 
`RemoteInputChannelTest#testConcurrentGetNextBufferAndRelease`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (**not applicable** / docs / 
JavaDocs / not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-17823) Resolve the race condition while releasing RemoteInputChannel

2020-05-19 Thread Zhijiang (Jira)
Zhijiang created FLINK-17823:


 Summary: Resolve the race condition while releasing 
RemoteInputChannel
 Key: FLINK-17823
 URL: https://issues.apache.org/jira/browse/FLINK-17823
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Network
Affects Versions: 1.11.0
Reporter: Zhijiang
Assignee: Zhijiang
 Fix For: 1.11.0


RemoteInputChannel#releaseAllResources might be called by the canceler thread. 
Meanwhile, the task thread can also call RemoteInputChannel#getNextBuffer. 
This can cause two potential problems:
 * The task thread might get a null buffer after the canceler thread has 
already released all the buffers, which can lead to a misleading NPE in 
getNextBuffer.
 * The task thread and the canceler thread might pull the same buffer 
concurrently, which causes an unexpected exception when the same buffer is 
recycled twice.

The solution is to properly synchronize the buffer queue in the release method 
so that the same buffer cannot be pulled by both the canceler thread and the 
task thread. In the getNextBuffer method, we add explicit checks to avoid the 
misleading NPE and to surface valid exceptions.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong commented on pull request #11906: [FLINK-17356][jdbc][postgres] Support PK and Unique constraints

2020-05-19 Thread GitBox


wuchong commented on pull request #11906:
URL: https://github.com/apache/flink/pull/11906#issuecomment-631225241


   Thanks @fpompermaier, it looks good to me in general. I added an IT case to 
verify that a group-by query can be inserted into a primary-keyed Postgres 
catalog table (this is the purpose of FLINK-17762). Besides, I slightly 
updated `getPrimaryKey` to return an optional constraint instead of a nullable 
one. I hope that's ok.
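   
   For illustration, the shape of that change is roughly the following 
(simplified stand-ins for the real catalog types; the actual method takes more 
parameters):
   
```java
import java.util.Optional;

// Sketch only: a simplified stand-in for the real constraint type.
final class UniqueConstraint {
    final String name;
    UniqueConstraint(String name) { this.name = name; }
}

interface PrimaryKeyLookup {
    // Returning Optional makes "no primary key" explicit in the type,
    // so callers cannot forget a null check:
    //   lookup.getPrimaryKey("t").ifPresent(pk -> ...);
    Optional<UniqueConstraint> getPrimaryKey(String tableName);
}
```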



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048


   
   ## CI report:
   
   * 7ae117dbf4d94f345f70d6f1e8cec97f71086a36 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1820)
 
   * fc462938ff28feca6fd689f6e51e1fca79efe975 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1901)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12246:
URL: https://github.com/apache/flink/pull/12246#issuecomment-630803193


   
   ## CI report:
   
   * 911e459fe53b61aa74ce3bc3d0761651eb7f61fb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1893)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12230:
URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457


   
   ## CI report:
   
   * 5b8eb4accc4478106f3e842ba18a1abc11194a43 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1845)
 
   * 2f0ca570ff878cd12f999570590a08fa75efcc6b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1903)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-19 Thread GitBox


flinkbot commented on pull request #12260:
URL: https://github.com/apache/flink/pull/12260#issuecomment-631223911


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7820729185644e576dc8d9c9204f2879a193cba0 (Wed May 20 
04:08:12 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog

2020-05-19 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17189:
---
Labels: pull-request-available  (was: )

> Table with processing time attribute can not be read from Hive catalog
> --
>
> Key: FLINK-17189
> URL: https://issues.apache.org/jira/browse/FLINK-17189
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem, Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: Timo Walther
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.10.2
>
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY   INTEGER,
>   L_PARTKEY    INTEGER,
>   L_SUPPKEY    INTEGER,
>   L_LINENUMBER INTEGER,
>   L_QUANTITY   DOUBLE,
>   L_EXTENDEDPRICE  DOUBLE,
>   L_DISCOUNT   DOUBLE,
>   L_TAX        DOUBLE,
>   L_CURRENCY   STRING,
>   L_RETURNFLAG STRING,
>   L_LINESTATUS STRING,
>   L_ORDERTIME  TIMESTAMP(3),
>   L_SHIPINSTRUCT   STRING,
>   L_SHIPMODE   STRING,
>   L_COMMENT    STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME   AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL 
> L_PROCTIME) NOT NULL
> converted type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME 
> ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL
> rel:
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[$15])
>   LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, 
> 30:INTERVAL MINUTE)])
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[PROCTIME()])
>   LogicalTableScan(table=[[hcat, default, prod_lineitem, source: 
> [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, 
> L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, 
> L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]])
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #12260: [FLINK-17189][table-planner] Table with proctime attribute cannot be read from Hive catalog

2020-05-19 Thread GitBox


JingsongLi opened a new pull request #12260:
URL: https://github.com/apache/flink/pull/12260


   
   ## What is the purpose of the change
   
   ```
   CREATE TABLE PROD_LINEITEM (
 ...
 L_ORDERTIME  TIMESTAMP(3),
 WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
 L_PROCTIME   AS PROCTIME()
   ) WITH (...)
   SELECT * FROM prod_lineitem;
   ```
   This failed with `AssertionError: Conversion to relational algebra failed to 
preserve datatypes`.
   
   ## Brief change log
   
   `TableSourceUtil.getSourceRowType` should not only adjust the rowtime field 
from the watermark spec, but also adjust proctime fields from computed columns 
(see the sketch below).
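   
   A minimal, self-contained sketch of the idea (plain Java with illustrative 
names, not the actual planner code): both the rowtime column named in the 
watermark spec and the `PROCTIME()` computed column must be rewritten to 
time-attribute kinds when the source row type is derived, otherwise the 
validated and converted types diverge as in the error above.
   
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;
   
   public class SourceRowTypeSketch {
       enum Kind { REGULAR, ROWTIME_ATTRIBUTE, PROCTIME_ATTRIBUTE }
   
       static Map<String, Kind> adjustFields(
               Map<String, Kind> fields, String rowtimeField, String proctimeField) {
           Map<String, Kind> adjusted = new LinkedHashMap<>();
           for (Map.Entry<String, Kind> e : fields.entrySet()) {
               if (e.getKey().equals(rowtimeField)) {
                   // adjusted from the WATERMARK FOR ... spec (already handled)
                   adjusted.put(e.getKey(), Kind.ROWTIME_ATTRIBUTE);
               } else if (e.getKey().equals(proctimeField)) {
                   // adjusted from the PROCTIME() computed column (the missing part)
                   adjusted.put(e.getKey(), Kind.PROCTIME_ATTRIBUTE);
               } else {
                   adjusted.put(e.getKey(), e.getValue());
               }
           }
           return adjusted;
       }
   
       public static void main(String[] args) {
           Map<String, Kind> fields = new LinkedHashMap<>();
           fields.put("L_ORDERTIME", Kind.REGULAR);
           fields.put("L_PROCTIME", Kind.REGULAR);
           // prints {L_ORDERTIME=ROWTIME_ATTRIBUTE, L_PROCTIME=PROCTIME_ATTRIBUTE}
           System.out.println(adjustFields(fields, "L_ORDERTIME", "L_PROCTIME"));
       }
   }
   ```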
   
   ## Verifying this change
   
   `HiveCatalogITCase.testReadWriteCsvWithProctime`
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-12030) KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not host this topic-partition

2020-05-19 Thread Jiangjie Qin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiangjie Qin resolved FLINK-12030.
--
Resolution: Fixed

Patch merged.
master: 51a0d42ade8ee3789036ac1ee7c121133b58212a
release-1.11: 0f072234d5cd30879b4e4845e69bee1a03cf1817

> KafkaITCase.testMultipleSourcesOnePartition is unstable: This server does not 
> host this topic-partition
> ---
>
> Key: FLINK-12030
> URL: https://issues.apache.org/jira/browse/FLINK-12030
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0
>Reporter: Aljoscha Krettek
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> This is a relevant part from the log:
> {code}
> 14:11:45,305 INFO  org.apache.flink.streaming.connectors.kafka.KafkaITCase
>- 
> 
> Test 
> testMetricsAndEndOfStream(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>  is running.
> 
> 14:11:45,310 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- 
> ===
> == Writing sequence of 300 into testEndOfStream with p=1
> ===
> 14:11:45,311 INFO  org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Writing attempt #1
> 14:11:45,316 INFO  
> org.apache.flink.streaming.connectors.kafka.KafkaTestEnvironmentImpl  - 
> Creating topic testEndOfStream-1
> 14:11:45,863 WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer  - Property 
> [transaction.timeout.ms] not specified. Setting it to 360 ms
> 14:11:45,910 WARN  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer  - Using 
> AT_LEAST_ONCE semantic, but checkpointing is not enabled. Switching to NONE 
> semantic.
> 14:11:45,921 INFO  
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer  - Starting 
> FlinkKafkaInternalProducer (1/1) to produce into default topic 
> testEndOfStream-1
> 14:11:46,006 ERROR org.apache.flink.streaming.connectors.kafka.KafkaTestBase  
>- Write attempt failed, trying again
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
>   at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
>   at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:638)
>   at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:79)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.writeSequence(KafkaConsumerTestBase.java:1918)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runEndOfStreamTest(KafkaConsumerTestBase.java:1537)
>   at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMetricsAndEndOfStream(KafkaITCase.java:136)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> Caused by: org.apache.flink.streaming.connectors.kafka.FlinkKafkaException: 
> Failed to send data to Kafka: This server does not host this topic-partition.
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.checkErroneous(FlinkKafkaProducer.java:1002)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.flush(FlinkKafkaProducer.java:787)
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.close(FlinkKafkaProducer.java:658)
>   at 
> org.apache.flink.api.common.functions.util.FunctionUtils.closeFunction(FunctionUtils.java:43)
>   at 
> org.

[jira] [Updated] (FLINK-15303) support predicate pushdown for sources in hive connector

2020-05-19 Thread Danny Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Danny Chen updated FLINK-15303:
---
Fix Version/s: (was: 1.11.0)

> support predicate pushdown for sources in hive connector 
> -
>
> Key: FLINK-15303
> URL: https://issues.apache.org/jira/browse/FLINK-15303
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / Hive
>Reporter: Bowen Li
>Assignee: Jingsong Lee
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] becketqin commented on pull request #12255: [FLINK-12030][connector/kafka] Check the topic existence after topic creation using KafkaConsumer

2020-05-19 Thread GitBox


becketqin commented on pull request #12255:
URL: https://github.com/apache/flink/pull/12255#issuecomment-631223120


   Patch merged.
   master: 51a0d42ade8ee3789036ac1ee7c121133b58212a
   release-1.11: 0f072234d5cd30879b4e4845e69bee1a03cf1817
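   
   For context, a hedged sketch of the kind of check described in the patch 
title (test-utility style; the class and method names are illustrative, and 
only `KafkaConsumer#partitionsFor` is assumed from the Kafka client API): 
after creating a topic, poll the broker metadata until the topic is visible 
instead of assuming the creation takes effect immediately.
   
   ```java
   import java.time.Duration;
   import java.util.List;
   import java.util.Properties;
   import org.apache.kafka.clients.consumer.KafkaConsumer;
   import org.apache.kafka.common.PartitionInfo;
   import org.apache.kafka.common.serialization.ByteArrayDeserializer;
   
   public final class TopicExistenceCheck {
   
       static void awaitTopic(String bootstrapServers, String topic, Duration timeout)
               throws InterruptedException {
           Properties props = new Properties();
           props.put("bootstrap.servers", bootstrapServers);
           props.put("key.deserializer", ByteArrayDeserializer.class.getName());
           props.put("value.deserializer", ByteArrayDeserializer.class.getName());
           long deadline = System.currentTimeMillis() + timeout.toMillis();
           try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
               while (System.currentTimeMillis() < deadline) {
                   // partitionsFor returns the known partitions for the topic;
                   // a null/empty result means the metadata is not visible yet
                   List<PartitionInfo> partitions = consumer.partitionsFor(topic);
                   if (partitions != null && !partitions.isEmpty()) {
                       return;
                   }
                   Thread.sleep(100); // retry until metadata propagates
               }
           }
           throw new IllegalStateException(
                   "Topic " + topic + " was not visible within " + timeout);
       }
   }
   ```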



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] becketqin closed pull request #12255: [FLINK-12030][connector/kafka] Check the topic existence after topic creation using KafkaConsumer

2020-05-19 Thread GitBox


becketqin closed pull request #12255:
URL: https://github.com/apache/flink/pull/12255


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog

2020-05-19 Thread Jingsong Lee (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111731#comment-17111731
 ] 

Jingsong Lee commented on FLINK-17189:
--

{{TableSourceUtil.getSourceRowType}} should not only adjust rowtime, but also 
adjust proctime fields. I will create a PR to fix this.

> Table with processing time attribute can not be read from Hive catalog
> --
>
> Key: FLINK-17189
> URL: https://issues.apache.org/jira/browse/FLINK-17189
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem, Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: Timo Walther
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0, 1.10.2
>
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY   INTEGER,
>   L_PARTKEY    INTEGER,
>   L_SUPPKEY    INTEGER,
>   L_LINENUMBER INTEGER,
>   L_QUANTITY   DOUBLE,
>   L_EXTENDEDPRICE  DOUBLE,
>   L_DISCOUNT   DOUBLE,
>   L_TAX        DOUBLE,
>   L_CURRENCY   STRING,
>   L_RETURNFLAG STRING,
>   L_LINESTATUS STRING,
>   L_ORDERTIME  TIMESTAMP(3),
>   L_SHIPINSTRUCT   STRING,
>   L_SHIPMODE   STRING,
>   L_COMMENT    STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME   AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL 
> L_PROCTIME) NOT NULL
> converted type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME 
> ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL
> rel:
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[$15])
>   LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, 
> 30:INTERVAL MINUTE)])
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[PROCTIME()])
>   LogicalTableScan(table=[[hcat, default, prod_lineitem, source: 
> [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, 
> L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, 
> L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]])
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog

2020-05-19 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-17189:
-
Affects Version/s: 1.10.1

> Table with processing time attribute can not be read from Hive catalog
> --
>
> Key: FLINK-17189
> URL: https://issues.apache.org/jira/browse/FLINK-17189
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem, Table SQL / Planner
>Affects Versions: 1.10.1
>Reporter: Timo Walther
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0, 1.10.2
>
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY   INTEGER,
>   L_PARTKEY    INTEGER,
>   L_SUPPKEY    INTEGER,
>   L_LINENUMBER INTEGER,
>   L_QUANTITY   DOUBLE,
>   L_EXTENDEDPRICE  DOUBLE,
>   L_DISCOUNT   DOUBLE,
>   L_TAX        DOUBLE,
>   L_CURRENCY   STRING,
>   L_RETURNFLAG STRING,
>   L_LINESTATUS STRING,
>   L_ORDERTIME  TIMESTAMP(3),
>   L_SHIPINSTRUCT   STRING,
>   L_SHIPMODE   STRING,
>   L_COMMENT    STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME   AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL 
> L_PROCTIME) NOT NULL
> converted type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME 
> ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL
> rel:
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[$15])
>   LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, 
> 30:INTERVAL MINUTE)])
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[PROCTIME()])
>   LogicalTableScan(table=[[hcat, default, prod_lineitem, source: 
> [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, 
> L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, 
> L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]])
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-17189) Table with processing time attribute can not be read from Hive catalog

2020-05-19 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-17189:


Assignee: Jingsong Lee

> Table with processing time attribute can not be read from Hive catalog
> --
>
> Key: FLINK-17189
> URL: https://issues.apache.org/jira/browse/FLINK-17189
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem, Table SQL / Planner
>Reporter: Timo Walther
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0, 1.10.2
>
>
> DDL:
> {code}
> CREATE TABLE PROD_LINEITEM (
>   L_ORDERKEY   INTEGER,
>   L_PARTKEY    INTEGER,
>   L_SUPPKEY    INTEGER,
>   L_LINENUMBER INTEGER,
>   L_QUANTITY   DOUBLE,
>   L_EXTENDEDPRICE  DOUBLE,
>   L_DISCOUNT   DOUBLE,
>   L_TAX        DOUBLE,
>   L_CURRENCY   STRING,
>   L_RETURNFLAG STRING,
>   L_LINESTATUS STRING,
>   L_ORDERTIME  TIMESTAMP(3),
>   L_SHIPINSTRUCT   STRING,
>   L_SHIPMODE   STRING,
>   L_COMMENT    STRING,
>   WATERMARK FOR L_ORDERTIME AS L_ORDERTIME - INTERVAL '5' MINUTE,
>   L_PROCTIME   AS PROCTIME()
> ) WITH (
>   'connector.type' = 'kafka',
>   'connector.version' = 'universal',
>   'connector.topic' = 'Lineitem',
>   'connector.properties.zookeeper.connect' = 'not-needed',
>   'connector.properties.bootstrap.servers' = 'kafka:9092',
>   'connector.startup-mode' = 'earliest-offset',
>   'format.type' = 'csv',
>   'format.field-delimiter' = '|'
> );
> {code}
> Query:
> {code}
> SELECT * FROM prod_lineitem;
> {code}
> Result:
> {code}
> [ERROR] Could not execute SQL statement. Reason:
> java.lang.AssertionError: Conversion to relational algebra failed to preserve 
> datatypes:
> validated type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIMESTAMP(3) NOT NULL 
> L_PROCTIME) NOT NULL
> converted type:
> RecordType(INTEGER L_ORDERKEY, INTEGER L_PARTKEY, INTEGER L_SUPPKEY, INTEGER 
> L_LINENUMBER, DOUBLE L_QUANTITY, DOUBLE L_EXTENDEDPRICE, DOUBLE L_DISCOUNT, 
> DOUBLE L_TAX, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_CURRENCY, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_RETURNFLAG, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_LINESTATUS, TIME 
> ATTRIBUTE(ROWTIME) L_ORDERTIME, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" 
> L_SHIPINSTRUCT, VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_SHIPMODE, 
> VARCHAR(2147483647) CHARACTER SET "UTF-16LE" L_COMMENT, TIME 
> ATTRIBUTE(PROCTIME) NOT NULL L_PROCTIME) NOT NULL
> rel:
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[$15])
>   LogicalWatermarkAssigner(rowtime=[L_ORDERTIME], watermark=[-($11, 
> 30:INTERVAL MINUTE)])
> LogicalProject(L_ORDERKEY=[$0], L_PARTKEY=[$1], L_SUPPKEY=[$2], 
> L_LINENUMBER=[$3], L_QUANTITY=[$4], L_EXTENDEDPRICE=[$5], L_DISCOUNT=[$6], 
> L_TAX=[$7], L_CURRENCY=[$8], L_RETURNFLAG=[$9], L_LINESTATUS=[$10], 
> L_ORDERTIME=[$11], L_SHIPINSTRUCT=[$12], L_SHIPMODE=[$13], L_COMMENT=[$14], 
> L_PROCTIME=[PROCTIME()])
>   LogicalTableScan(table=[[hcat, default, prod_lineitem, source: 
> [KafkaTableSource(L_ORDERKEY, L_PARTKEY, L_SUPPKEY, L_LINENUMBER, L_QUANTITY, 
> L_EXTENDEDPRICE, L_DISCOUNT, L_TAX, L_CURRENCY, L_RETURNFLAG, L_LINESTATUS, 
> L_ORDERTIME, L_SHIPINSTRUCT, L_SHIPMODE, L_COMMENT)]]])
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12230:
URL: https://github.com/apache/flink/pull/12230#issuecomment-630205457


   
   ## CI report:
   
   * 5b8eb4accc4478106f3e842ba18a1abc11194a43 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1845)
 
   * 2f0ca570ff878cd12f999570590a08fa75efcc6b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #11175:
URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100


   
   ## CI report:
   
   * f41f4359a68f8c9b85a33d3414bf346e02c17d6a Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1842)
 
   * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1900)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427725970



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。
 
 
 
-### First steps with one of Flink's APIs
+### Flink API 入门
 
-The **Code Walkthroughs** are a great way to get started quickly with a 
step-by-step introduction to
-one of Flink's APIs. Each walkthrough provides instructions for bootstrapping 
a small skeleton
-project, and then shows how to extend it to a simple application.
+**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 
代码框架,并如何逐步将其扩展为简单的应用程序。
 
-* The [**DataStream API**  code walkthrough]({% link 
getting-started/walkthroughs/datastream_api.md %}) shows how
-  to implement a simple DataStream application and how to extend it to be 
stateful and use timers.
-  The DataStream API is Flink's main abstraction for implementing stateful 
streaming applications
-  with sophisticated time semantics in Java or Scala.
+
+* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 
DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 
语言中实现具有复杂时间语义的有状态数据流处理的应用程序。
 
-* Flink's **Table API** is a relational API used for writing SQL-like queries 
in Java, Scala, or
-  Python, which are then automatically optimized, and can be executed on batch 
or streaming data
-  with identical syntax and semantics. The [Table API code walkthrough for 
Java and Scala]({% link
-  getting-started/walkthroughs/table_api.md %}) shows how to implement a 
simple Table API query on a
-  batch source and how to evolve it into a continuous query on a streaming 
source. There's also a
-  similar [code walkthrough for the Python Table API]({% link
-  getting-started/walkthroughs/python_table_api.md %}).
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 
的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code 
walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 
Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the 
Python Table API](./walkthroughs/python_table_api.html)。
 
-### Taking a Deep Dive with the Hands-on Training
+### 通过实操进一步探索 Flink
 
-The [**Hands-on Training**]({% link training/index.md %}) is a self-paced 
training course with
-a set of lessons and hands-on exercises. This step-by-step introduction to 
Flink focuses
-on learning how to use the DataStream API to meet the needs of common, 
real-world use cases,
-and provides a complete introduction to the fundamental concepts: parallel 
dataflows,
-stateful stream processing, event time and watermarking, and fault tolerance 
via state snapshots.
+[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 
Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel 
dataflows)、有状态流式处理(stateful stream processing)、Event 
Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。

Review comment:
   The external links here were kept consistent with those in the old 
translation; if needed, I can try switching them to the new-version links 
used in the original text.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427726043



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。
 
 
 
-### First steps with one of Flink's APIs
+### Flink API 入门
 
-The **Code Walkthroughs** are a great way to get started quickly with a 
step-by-step introduction to
-one of Flink's APIs. Each walkthrough provides instructions for bootstrapping 
a small skeleton
-project, and then shows how to extend it to a simple application.
+**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 
代码框架,并如何逐步将其扩展为简单的应用程序。
 
-* The [**DataStream API**  code walkthrough]({% link 
getting-started/walkthroughs/datastream_api.md %}) shows how
-  to implement a simple DataStream application and how to extend it to be 
stateful and use timers.
-  The DataStream API is Flink's main abstraction for implementing stateful 
streaming applications
-  with sophisticated time semantics in Java or Scala.
+
+* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 
DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 
语言中实现具有复杂时间语义的有状态数据流处理的应用程序。
 
-* Flink's **Table API** is a relational API used for writing SQL-like queries 
in Java, Scala, or
-  Python, which are then automatically optimized, and can be executed on batch 
or streaming data
-  with identical syntax and semantics. The [Table API code walkthrough for 
Java and Scala]({% link
-  getting-started/walkthroughs/table_api.md %}) shows how to implement a 
simple Table API query on a
-  batch source and how to evolve it into a continuous query on a streaming 
source. There's also a
-  similar [code walkthrough for the Python Table API]({% link
-  getting-started/walkthroughs/python_table_api.md %}).
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 
的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code 
walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 
Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the 
Python Table API](./walkthroughs/python_table_api.html)。
 
-### Taking a Deep Dive with the Hands-on Training
+### 通过实操进一步探索 Flink
 
-The [**Hands-on Training**]({% link training/index.md %}) is a self-paced 
training course with
-a set of lessons and hands-on exercises. This step-by-step introduction to 
Flink focuses
-on learning how to use the DataStream API to meet the needs of common, 
real-world use cases,
-and provides a complete introduction to the fundamental concepts: parallel 
dataflows,
-stateful stream processing, event time and watermarking, and fault tolerance 
via state snapshots.
+[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 
Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel 
dataflows)、有状态流式处理(stateful stream processing)、Event 
Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。

Review comment:
   Thanks for your effort~





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427725668



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。
 
 
 
-### First steps with one of Flink's APIs
+### Flink API 入门
 
-The **Code Walkthroughs** are a great way to get started quickly with a 
step-by-step introduction to
-one of Flink's APIs. Each walkthrough provides instructions for bootstrapping 
a small skeleton
-project, and then shows how to extend it to a simple application.
+**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 
代码框架,并如何逐步将其扩展为简单的应用程序。
 
-* The [**DataStream API**  code walkthrough]({% link 
getting-started/walkthroughs/datastream_api.md %}) shows how
-  to implement a simple DataStream application and how to extend it to be 
stateful and use timers.
-  The DataStream API is Flink's main abstraction for implementing stateful 
streaming applications
-  with sophisticated time semantics in Java or Scala.
+
+* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 
DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 
语言中实现具有复杂时间语义的有状态数据流处理的应用程序。
 
-* Flink's **Table API** is a relational API used for writing SQL-like queries 
in Java, Scala, or
-  Python, which are then automatically optimized, and can be executed on batch 
or streaming data
-  with identical syntax and semantics. The [Table API code walkthrough for 
Java and Scala]({% link
-  getting-started/walkthroughs/table_api.md %}) shows how to implement a 
simple Table API query on a
-  batch source and how to evolve it into a continuous query on a streaming 
source. There's also a
-  similar [code walkthrough for the Python Table API]({% link
-  getting-started/walkthroughs/python_table_api.md %}).
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 
的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code 
walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 
Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the 
Python Table API](./walkthroughs/python_table_api.html)。

Review comment:
   "语言嵌入式关系 API" (language-embedded relational API) is also from the old 
translation; I will re-translate it in the next commit. Thanks~





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427725466



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。
 
 
 
-### First steps with one of Flink's APIs
+### Flink API 入门
 
-The **Code Walkthroughs** are a great way to get started quickly with a 
step-by-step introduction to
-one of Flink's APIs. Each walkthrough provides instructions for bootstrapping 
a small skeleton
-project, and then shows how to extend it to a simple application.
+**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 
代码框架,并如何逐步将其扩展为简单的应用程序。
 
-* The [**DataStream API**  code walkthrough]({% link 
getting-started/walkthroughs/datastream_api.md %}) shows how
-  to implement a simple DataStream application and how to extend it to be 
stateful and use timers.
-  The DataStream API is Flink's main abstraction for implementing stateful 
streaming applications
-  with sophisticated time semantics in Java or Scala.
+

[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427725122



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。

Review comment:
   
This part is the old Chinese translation. Since the English changes in this 
issue do not touch it, I left its Chinese translation unchanged; if a 
re-translation is needed, I can translate it and submit a new commit.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yangyichao-mango commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


yangyichao-mango commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427724564



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。

Review comment:
   Agreed, changing it to the pattern "阅读 XXX 可以 YYY" (reading XXX lets you 
YYY) reads more smoothly, I think; I will submit a new commit shortly.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread Yang Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111718#comment-17111718
 ] 

Yang Wang commented on FLINK-17565:
---

I have raised the priority to Critical so that it can be tracked on the 
1.11 kanban [1]. It should be fixed before the release RC.

 

 

[1]. 
[https://issues.apache.org/jira/secure/RapidBoard.jspa?rapidView=364&projectKey=FLINK]

> Bump fabric8 version from 4.5.2 to 4.9.2
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently we are using version 4.5.2; it would be better to upgrade to 
> 4.9.2. Some of the reasons are as follows:
>  # It removed the use of reapers manually doing cascade deletion of 
> resources, leaving it up to the Kubernetes APIServer, which solves the issue of 
> FLINK-17566, more info: 
> [https://github.com/fabric8io/kubernetes-client/issues/1880]
>  # It introduced a regression in building Quantity values in 4.7.0, release 
> note [https://github.com/fabric8io/kubernetes-client/issues/1953].
>  # It provided better support for K8s 1.17, release note: 
> [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0].
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].
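
A quick sanity check one might run when validating the upgrade (this assumes 
fabric8's {{io.fabric8.kubernetes.api.model.Quantity}} constructors and 
getters, and is not taken from the patch): the 4.7.x regression referenced in 
point 2 concerned how single-string values like "1024Mi" are split into 
amount and format.

{code}
import io.fabric8.kubernetes.api.model.Quantity;

public class QuantityCheck {
    public static void main(String[] args) {
        // explicit amount/format pair
        Quantity explicit = new Quantity("1024", "Mi");
        // single-string form that the regression affected
        Quantity parsed = new Quantity("1024Mi");
        // with a fixed client both lines print: 1024 / Mi
        System.out.println(explicit.getAmount() + " / " + explicit.getFormat());
        System.out.println(parsed.getAmount() + " / " + parsed.getFormat());
    }
}
{code}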



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-17565:
--
Priority: Critical  (was: Major)

> Bump fabric8 version from 4.5.2 to 4.9.2
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently we are using version 4.5.2; it would be better to upgrade to 
> 4.9.2. Some of the reasons are as follows:
>  # It removed the use of reapers manually doing cascade deletion of 
> resources, leaving it up to the Kubernetes APIServer, which solves the issue of 
> FLINK-17566, more info: 
> [https://github.com/fabric8io/kubernetes-client/issues/1880]
>  # It introduced a regression in building Quantity values in 4.7.0, release 
> note [https://github.com/fabric8io/kubernetes-client/issues/1953].
>  # It provided better support for K8s 1.17, release note: 
> [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0].
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-17565:
--
Description: 
Currently we are using version 4.5.2; it would be better to upgrade to 
4.9.2. Some of the reasons are as follows:
 # It removed the use of reapers manually doing cascade deletion of resources, 
leaving it up to the Kubernetes APIServer, which solves the issue of FLINK-17566, 
more info: [https://github.com/fabric8io/kubernetes-client/issues/1880]
 # It introduced a regression in building Quantity values in 4.7.0, release 
note [https://github.com/fabric8io/kubernetes-client/issues/1953].
 # It provided better support for K8s 1.17, release note: 
[https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0].

For more release notes, please refer to [fabric8 
releases|https://github.com/fabric8io/kubernetes-client/releases].

  was:
Currently, we are using a version of 4.5.2, it's better that we upgrade it to 
4.9.1, some of the reasons are as follows:
# It removed the use of reapers manually doing cascade deletion of resources, 
leave it up to Kubernetes APIServer, which solves the issue of FLINK-17566, 
more info:  https://github.com/fabric8io/kubernetes-client/issues/1880
# It introduced a regression in building Quantity values in 4.7.0, release note 
https://github.com/fabric8io/kubernetes-client/issues/1953.
# It provided better support for K8s 1.17, release note: 
https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0.

For more release notes, please refer to [fabric8 
releases|https://github.com/fabric8io/kubernetes-client/releases].


> Bump fabric8 version from 4.5.2 to 4.9.2
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently we are using version 4.5.2; it would be better to upgrade to 
> 4.9.2. Some of the reasons are as follows:
>  # It removed the use of reapers manually doing cascade deletion of 
> resources, leaving it up to the Kubernetes APIServer, which solves the issue of 
> FLINK-17566, more info: 
> [https://github.com/fabric8io/kubernetes-client/issues/1880]
>  # It introduced a regression in building Quantity values in 4.7.0, release 
> note [https://github.com/fabric8io/kubernetes-client/issues/1953].
>  # It provided better support for K8s 1.17, release note: 
> [https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0].
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12240: [FLINK-15792][k8s] Make Flink logs accessible via kubectl logs per default

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12240:
URL: https://github.com/apache/flink/pull/12240#issuecomment-630661048


   
   ## CI report:
   
   * 7ae117dbf4d94f345f70d6f1e8cec97f71086a36 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1820)
 
   * fc462938ff28feca6fd689f6e51e1fca79efe975 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17565) Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread Yang Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yang Wang updated FLINK-17565:
--
Summary: Bump fabric8 version from 4.5.2 to 4.9.2  (was: Bump fabric8 
version from 4.5.2 to 4.9.1)

> Bump fabric8 version from 4.5.2 to 4.9.2
> 
>
> Key: FLINK-17565
> URL: https://issues.apache.org/jira/browse/FLINK-17565
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Currently, we are using a version of 4.5.2, it's better that we upgrade it to 
> 4.9.1, some of the reasons are as follows:
> # It removed the use of reapers manually doing cascade deletion of resources, 
> leave it up to Kubernetes APIServer, which solves the issue of FLINK-17566, 
> more info:  https://github.com/fabric8io/kubernetes-client/issues/1880
> # It introduced a regression in building Quantity values in 4.7.0, release 
> note https://github.com/fabric8io/kubernetes-client/issues/1953.
> # It provided better support for K8s 1.17, release note: 
> https://github.com/fabric8io/kubernetes-client/releases/tag/v4.7.0.
> For more release notes, please refer to [fabric8 
> releases|https://github.com/fabric8io/kubernetes-client/releases].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17351) CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts

2020-05-19 Thread Yuan Mei (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111716#comment-17111716
 ] 

Yuan Mei commented on FLINK-17351:
--

 

Thanks for the pointers [~roman_khachatryan]. I had quite a nice walk through 
the code ;)

I guess the fix is simple: also increment `continuousFailureCounter` for the 
`CHECKPOINT_EXPIRED` exception (a sketch follows below).

However, most of the checkpoint failure reasons in that list are currently 
ignored.

Hence I am wondering: what are the criteria for which failure reasons should 
be ignored, and which should not?
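
A minimal, self-contained sketch of that idea (this is not the actual 
{{CheckpointFailureManager}}; the names and structure here are simplified 
assumptions):

{code}
public class FailureCounterSketch {

    enum FailureReason { CHECKPOINT_DECLINED, CHECKPOINT_EXPIRED, OTHER }

    private final int tolerableFailureNumber;
    private int continuousFailureCounter;

    FailureCounterSketch(int tolerableFailureNumber) {
        this.tolerableFailureNumber = tolerableFailureNumber;
    }

    /** Count expired checkpoints as well, instead of silently ignoring them. */
    void handleCheckpointFailure(FailureReason reason, Runnable failJob) {
        if (reason == FailureReason.CHECKPOINT_DECLINED
                || reason == FailureReason.CHECKPOINT_EXPIRED) {
            continuousFailureCounter++;
        }
        if (continuousFailureCounter > tolerableFailureNumber) {
            failJob.run(); // mirrors failing the job once the limit is exceeded
        }
    }

    /** A successful checkpoint resets the failure streak. */
    void handleCheckpointSuccess() {
        continuousFailureCounter = 0;
    }
}
{code}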

> CheckpointCoordinator and CheckpointFailureManager ignores checkpoint timeouts
> --
>
> Key: FLINK-17351
> URL: https://issues.apache.org/jira/browse/FLINK-17351
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.9.2, 1.10.0
>Reporter: Piotr Nowojski
>Priority: Critical
> Fix For: 1.11.0
>
>
> As described in point 2: 
> https://issues.apache.org/jira/browse/FLINK-17327?focusedCommentId=17090576&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17090576
> (copy of description from above linked comment):
> The logic in how {{CheckpointCoordinator}} handles checkpoint timeouts is 
> broken. In your examples, [~qinjunjerry], the job should have failed after 
> the first checkpoint failure, but checkpoints were timing out on the 
> CheckpointCoordinator after 5 seconds, before {{FlinkKafkaProducer}} detected 
> the Kafka failure after 2 minutes. Those timeouts were not checked against 
> the {{setTolerableCheckpointFailureNumber(...)}} limit, so the job kept going 
> with many timed-out checkpoints. Now a funny thing happens once 
> FlinkKafkaProducer detects the Kafka failure, and it depends on where the 
> failure was detected:
> a) while processing a record? No problem, the job fails over immediately once 
> the failure is detected (in this example after 2 minutes).
> b) during a checkpoint? Heh, the failure is reported to 
> {{CheckpointCoordinator}} *and gets ignored, as the PendingCheckpoint was 
> already discarded 2 minutes ago* :) So theoretically the checkpoints can keep 
> failing forever and the job will not restart automatically, unless something 
> else fails.
> Even more funny things can happen if we mix FLINK-17350 or b) with an 
> intermittent external-system failure. The sink reports an exception, the 
> transaction is lost/aborted, and the sink is in a failed state; but if by 
> happy coincidence it manages to accept further records, this exception can be 
> lost and all of the records in those failed checkpoints will be lost forever 
> as well. In the examples that [~qinjunjerry] posted this has not happened: 
> {{FlinkKafkaProducer}} was not able to recover after the initial failure and 
> kept throwing exceptions until the job finally failed (but much later than it 
> should have). And that is not guaranteed anywhere.
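
For reference, the limit mentioned above is configured on the user side as 
follows (the configuration API exists since Flink 1.9; the concrete values are 
illustrative only):

{code}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class TolerableFailuresExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5_000L);                          // checkpoint every 5s
        env.getCheckpointConfig().setCheckpointTimeout(5_000L);   // expire after 5s
        // Fail the job once more than 0 checkpoint failures occur; the bug
        // described above is that timed-out checkpoints bypass this limit.
        env.getCheckpointConfig().setTolerableCheckpointFailureNumber(0);
    }
}
{code}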



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wangyang0918 commented on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread GitBox


wangyang0918 commented on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-631210861


   @zhengcanbin Thanks a lot for creating this PR. I am afraid this PR does 
not work as-is because the new version introduces some additional 
dependencies (e.g. `com.fasterxml.jackson.datatype:jackson-datatype-jsr310`). 
Could you please check that?
   
   ```
   2020-05-18 14:22:19,882 INFO  
org.apache.flink.client.deployment.DefaultClusterClientServiceLoader [] - Could 
not load factory due to missing dependencies.
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/flink/kubernetes/shaded/com/fasterxml/jackson/datatype/jsr310/JavaTimeModule
at 
io.fabric8.kubernetes.client.internal.KubeConfigUtils.parseConfigFromString(KubeConfigUtils.java:44)
at 
io.fabric8.kubernetes.client.Config.loadFromKubeconfig(Config.java:505)
at io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:491)
at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:230)
at io.fabric8.kubernetes.client.Config.(Config.java:214)
at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:225)
at 
org.apache.flink.kubernetes.kubeclient.KubeClientFactory.fromConfiguration(KubeClientFactory.java:69)
at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:58)
at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.run(KubernetesSessionCli.java:95)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.lambda$main$0(KubernetesSessionCli.java:185)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.main(KubernetesSessionCli.java:185)
   Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.kubernetes.shaded.com.fasterxml.jackson.datatype.jsr310.JavaTimeModule
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
   
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] wangyang0918 edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread GitBox


wangyang0918 edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-631210861


   @zhengcanbin Thanks a lot for creating this PR. I am afraid this PR does 
not work as-is because the new `kubernetes-client` version introduces some 
additional dependencies (e.g. 
`com.fasterxml.jackson.datatype:jackson-datatype-jsr310`). Could you please 
check that?
   
   ```
   2020-05-18 14:22:19,882 INFO  
org.apache.flink.client.deployment.DefaultClusterClientServiceLoader [] - Could 
not load factory due to missing dependencies.
   Exception in thread "main" java.lang.NoClassDefFoundError: 
org/apache/flink/kubernetes/shaded/com/fasterxml/jackson/datatype/jsr310/JavaTimeModule
at 
io.fabric8.kubernetes.client.internal.KubeConfigUtils.parseConfigFromString(KubeConfigUtils.java:44)
at 
io.fabric8.kubernetes.client.Config.loadFromKubeconfig(Config.java:505)
at io.fabric8.kubernetes.client.Config.tryKubeConfig(Config.java:491)
at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:230)
at io.fabric8.kubernetes.client.Config.(Config.java:214)
at io.fabric8.kubernetes.client.Config.autoConfigure(Config.java:225)
at 
org.apache.flink.kubernetes.kubeclient.KubeClientFactory.fromConfiguration(KubeClientFactory.java:69)
at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:58)
at 
org.apache.flink.kubernetes.KubernetesClusterClientFactory.createClusterDescriptor(KubernetesClusterClientFactory.java:39)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.run(KubernetesSessionCli.java:95)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.lambda$main$0(KubernetesSessionCli.java:185)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at 
org.apache.flink.kubernetes.cli.KubernetesSessionCli.main(KubernetesSessionCli.java:185)
   Caused by: java.lang.ClassNotFoundException: 
org.apache.flink.kubernetes.shaded.com.fasterxml.jackson.datatype.jsr310.JavaTimeModule
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 13 more
   
   ```
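   
   One way to check locally whether the relocated class actually made it into 
the shaded jar (a tiny diagnostic sketch; the relocation prefix is copied from 
the stack trace above):
   
   ```
   public class ShadedDepCheck {
       public static void main(String[] args) {
           try {
               // run with the shaded flink-kubernetes jar on the classpath
               Class.forName(
                   "org.apache.flink.kubernetes.shaded.com.fasterxml.jackson.datatype.jsr310.JavaTimeModule");
               System.out.println("JavaTimeModule found in the shaded jar");
           } catch (ClassNotFoundException e) {
               System.out.println(
                   "JavaTimeModule missing -> jackson-datatype-jsr310 was not bundled/relocated");
           }
       }
   }
   ```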



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on a change in pull request #12230: [FLINK-17504][docs] Update Chinese translation of Getting Started / O…

2020-05-19 Thread GitBox


klion26 commented on a change in pull request #12230:
URL: https://github.com/apache/flink/pull/12230#discussion_r427718620



##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-There are many ways to get started with Apache Flink. Which one is the best for
-you depends on your goals and prior experience:
+上手使用 Apache Flink 有很多方式,哪一个最适合你取决于你的目标和以前的经验。
 
-* take a look at the **Docker Playgrounds** if you want to see what Flink can 
do, via a hands-on,
-  docker-based introduction to specific Flink concepts
-* explore one of the **Code Walkthroughs** if you want a quick, end-to-end
-  introduction to one of Flink's APIs
-* work your way through the **Hands-on Training** for a comprehensive,
-  step-by-step introduction to Flink
-* use **Project Setup** if you already know the basics of Flink and want a
-  project template for Java or Scala, or need help setting up the dependencies
+* 通过阅读 **Docker Playgrounds** 小节中基于 Docker 的 Flink 实践来了解 Flink 的基本概念和功能。
+* 可以通过 **Code Walkthroughs** 小节快速了解 Flink API。
+* 可以通过 **Hands-on Training** 章节逐步全面的学习 Flink。
+* 如果你已经了解 Flink 的基本概念并且想构建 Flink 项目,可以通过**项目构建设置**小节获取 Java/Scala 的项目模板或项目依赖。
 
-### Taking a first look at Flink
+### 初识 Flink
 
-The **Docker Playgrounds** provide sandboxed Flink environments that are set 
up in just a few minutes and which allow you to explore and play with Flink.
+通过 **Docker Playgrounds** 提供沙箱的Flink环境,你只需花几分钟做些简单设置,就可以开始探索和使用 Flink。
 
-* The [**Operations Playground**]({% link 
getting-started/docker-playgrounds/flink-operations-playground.md %}) shows you 
how to operate streaming applications with Flink. You can experience how Flink 
recovers application from failures, upgrade and scale streaming applications up 
and down, and query application metrics.
+* [**Flink Operations 
Playground**](./docker-playgrounds/flink-operations-playground.html) 向你展示如何使用 
Flink 编写数据流应用程序。你可以体验 Flink 如何从故障中恢复应用程序,升级、提高并行度、降低并行度和监控运行的状态指标等特性。
 
 
 
-### First steps with one of Flink's APIs
+### Flink API 入门
 
-The **Code Walkthroughs** are a great way to get started quickly with a 
step-by-step introduction to
-one of Flink's APIs. Each walkthrough provides instructions for bootstrapping 
a small skeleton
-project, and then shows how to extend it to a simple application.
+**代码练习**是快速入门的最佳方式,通过代码练习可以逐步深入地理解 Flink API。每个示例都演示了如何构建基础的 Flink 
代码框架,并如何逐步将其扩展为简单的应用程序。
 
-* The [**DataStream API**  code walkthrough]({% link 
getting-started/walkthroughs/datastream_api.md %}) shows how
-  to implement a simple DataStream application and how to extend it to be 
stateful and use timers.
-  The DataStream API is Flink's main abstraction for implementing stateful 
streaming applications
-  with sophisticated time semantics in Java or Scala.
+
+* [**DataStream API 示例**](./walkthroughs/datastream_api.html) 展示了如何实现一个基本的 
DataStream 应用程序,并把它扩展成有状态的应用程序。DataStream API 是 Flink 的主要抽象,可用于在 Java 或 Scala 
语言中实现具有复杂时间语义的有状态数据流处理的应用程序。
 
-* Flink's **Table API** is a relational API used for writing SQL-like queries 
in Java, Scala, or
-  Python, which are then automatically optimized, and can be executed on batch 
or streaming data
-  with identical syntax and semantics. The [Table API code walkthrough for 
Java and Scala]({% link
-  getting-started/walkthroughs/table_api.md %}) shows how to implement a 
simple Table API query on a
-  batch source and how to evolve it into a continuous query on a streaming 
source. There's also a
-  similar [code walkthrough for the Python Table API]({% link
-  getting-started/walkthroughs/python_table_api.md %}).
+* **Table API** 是 Flink 的语言嵌入式关系 API,用于在 Java,Scala 或 Python 中编写类 SQL 
的查询,并且这些查询会自动进行优化。Table API 查询可以使用一致的语法和语义同时在批处理或流数据上运行。[Table API code 
walkthrough for Java and Scala](./walkthroughs/table_api.html) 演示了如何在批处理中简单的使用 
Table API 进行查询,以及如何将其扩展为流处理中的查询。Python Table API 同上 [code walkthrough for the 
Python Table API](./walkthroughs/python_table_api.html)。
 
-### Taking a Deep Dive with the Hands-on Training
+### 通过实操进一步探索 Flink
 
-The [**Hands-on Training**]({% link training/index.md %}) is a self-paced 
training course with
-a set of lessons and hands-on exercises. This step-by-step introduction to 
Flink focuses
-on learning how to use the DataStream API to meet the needs of common, 
real-world use cases,
-and provides a complete introduction to the fundamental concepts: parallel 
dataflows,
-stateful stream processing, event time and watermarking, and fault tolerance 
via state snapshots.
+[Hands-on Training](/zh/training/index.html) 是一系列可供自主学习的练习课程。这些课程会循序渐进的介绍 
Flink,包括如何使用 DataStream API 来满足常见的、真实的需求场景,并提供对 Flink 中并行数据流(parallel 
dataflows)、有状态流式处理(stateful stream processing)、Event 
Time、Watermarking、通过状态快照实现容错(fault tolerance via state snapshots)等基本概念的完整介绍。

Review comment:
   Why was the format of this external link changed?

##
File path: docs/getting-started/index.zh.md
##
@@ -27,54 +27,37 @@ specific language governing

[jira] [Commented] (FLINK-17821) Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP

2020-05-19 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111711#comment-17111711
 ] 

Lijie Wang commented on FLINK-17821:


Does this duplicate https://issues.apache.org/jira/browse/FLINK-12030?

> Kafka010TableITCase>KafkaTableTestBase.testKafkaSourceSink failed on AZP
> 
>
> Key: FLINK-17821
> URL: https://issues.apache.org/jira/browse/FLINK-17821
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka
>Affects Versions: 1.12.0
>Reporter: Zhu Zhu
>Priority: Critical
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1871&view=logs&j=d44f43ce-542c-597d-bf94-b0718c71e5e8&t=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8&l=12032
> 2020-05-19T16:29:40.7239430Z Test testKafkaSourceSink[legacy = false, topicId 
> = 1](org.apache.flink.streaming.connectors.kafka.table.Kafka010TableITCase) 
> failed with:
> 2020-05-19T16:29:40.7240291Z java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-19T16:29:40.7241033Z  at 
> java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
> 2020-05-19T16:29:40.7241542Z  at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
> 2020-05-19T16:29:40.7242127Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil$.execInsertSqlAndWaitResult(TableEnvUtil.scala:31)
> 2020-05-19T16:29:40.7242729Z  at 
> org.apache.flink.table.planner.runtime.utils.TableEnvUtil.execInsertSqlAndWaitResult(TableEnvUtil.scala)
> 2020-05-19T16:29:40.7243239Z  at 
> org.apache.flink.streaming.connectors.kafka.table.KafkaTableTestBase.testKafkaSourceSink(KafkaTableTestBase.java:145)
> 2020-05-19T16:29:40.7243691Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-19T16:29:40.7244273Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-19T16:29:40.7244729Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-19T16:29:40.7245117Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-19T16:29:40.7245515Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-19T16:29:40.7245956Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-19T16:29:40.7246419Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-19T16:29:40.7246870Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-19T16:29:40.7247287Z  at 
> org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
> 2020-05-19T16:29:40.7251320Z  at 
> org.junit.rules.RunRules.evaluate(RunRules.java:20)
> 2020-05-19T16:29:40.7251833Z  at 
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> 2020-05-19T16:29:40.7252251Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> 2020-05-19T16:29:40.7252716Z  at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> 2020-05-19T16:29:40.7253117Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7253502Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7254041Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7254528Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7255500Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7256064Z  at 
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> 2020-05-19T16:29:40.7256438Z  at 
> org.junit.runners.Suite.runChild(Suite.java:128)
> 2020-05-19T16:29:40.7256758Z  at 
> org.junit.runners.Suite.runChild(Suite.java:27)
> 2020-05-19T16:29:40.7257118Z  at 
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> 2020-05-19T16:29:40.7257486Z  at 
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> 2020-05-19T16:29:40.7257885Z  at 
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> 2020-05-19T16:29:40.7258389Z  at 
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> 2020-05-19T16:29:40.7258821Z  at 
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> 2020-05-19T16:29:40.7259219Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-19T16:29:40.7259664Z  at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
> 2020-05-19T16:29:40.7260098Z  at 
> org.junit.rules.ExternalResource$1.evaluate(Extern

[GitHub] [flink] flinkbot edited a comment on pull request #12215: [FLINK-17565][k8s] Bump fabric8 version from 4.5.2 to 4.9.2

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12215:
URL: https://github.com/apache/flink/pull/12215#issuecomment-630047332


   
   ## CI report:
   
   * 5f357acabcb13d64d8e9a042af14329415db0f87 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1708)
 
   * 906be78b0943a61b70d4624b95bad5479c9f3d92 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1896)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12259: [hotfix][k8s] Remove unused constant variable

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #12259:
URL: https://github.com/apache/flink/pull/12259#issuecomment-631191345


   
   ## CI report:
   
   * 1ee1aadd85244dccac74b71c63f21379195b112b Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1897)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111709#comment-17111709
 ] 

Dian Fu edited comment on FLINK-17822 at 5/20/20, 3:08 AM:
---

Several Java 11 tests in the same cron job failed with the same exception: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b


was (Author: dian.fu):
It seems that all the Java 11 tests in the same cron job failed with this 
exception: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> --
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor

[GitHub] [flink] flinkbot edited a comment on pull request #11175: [FLINK-16197][hive] Failed to query partitioned table when partition …

2020-05-19 Thread GitBox


flinkbot edited a comment on pull request #11175:
URL: https://github.com/apache/flink/pull/11175#issuecomment-589671100


   
   ## CI report:
   
   * f41f4359a68f8c9b85a33d3414bf346e02c17d6a Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1842)
 
   * 7cf8bc2371f60ce02daec08bda96b30e8ab94a32 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111709#comment-17111709
 ] 

Dian Fu commented on FLINK-17822:
-

It seems that all the Java 11 tests in the same cron job failed with this 
error: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b
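
For context, the failing reflective access can be reproduced outside of Flink 
with a minimal sketch (assumption: plain JDK 11, where SharedSecrets still 
lives in jdk.internal.misc):

{code}
import java.lang.reflect.Method;

public class SharedSecretsAccess {
    public static void main(String[] args) throws Exception {
        // The lookup succeeds, but invoke() fails on JDK 11 because java.base
        // does not export jdk.internal.misc to the unnamed module (the same
        // IllegalAccessException as in the log below). Running with
        //   --add-exports java.base/jdk.internal.misc=ALL-UNNAMED
        // makes the call succeed.
        Class<?> sharedSecrets = Class.forName("jdk.internal.misc.SharedSecrets");
        Method getJavaLangRefAccess = sharedSecrets.getMethod("getJavaLangRefAccess");
        getJavaLangRefAccess.invoke(null);
    }
}
{code}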

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> --
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  

[jira] [Comment Edited] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17111709#comment-17111709
 ] 

Dian Fu edited comment on FLINK-17822 at 5/20/20, 3:02 AM:
---

It seems that all the Java 11 tests in the same cron job failed with this 
exception: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b


was (Author: dian.fu):
It seems that all the Java 11 tests in the same cron job failed with this 
error: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1887&view=logs&j=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f&t=d4549d78-6fab-5c0c-bdb9-abaafb66ea8b

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> --
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slo

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in Java 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Summary: Nightly Flink CLI end-to-end test failed with 
"JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
jdk.internal.misc.SharedSecrets" in Java 11   (was: Nightly Flink CLI 
end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot 
access class jdk.internal.misc.SharedSecrets" in JDK 11 )

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in Java 11 
> --
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Component/s: Runtime / Task

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in JDK 11 
> -
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
> 2020-05-19T21:59:39.884

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Component/s: Tests

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in JDK 11 
> -
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
> 2020-05-19T21:59:39.88487

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Affects Version/s: 1.11.0

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in JDK 11 
> -
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.11.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
> 2020-05-19T21:59:39.8848732Z  at 
> jdk.internal.reflect.Native
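
The first frames show the root cause: on JDK 11, Method.invoke runs a module
access check before dispatching, and java.base does not export
jdk.internal.misc, so the reflective call is rejected for code in the unnamed
module (i.e., anything on the classpath). A minimal reproduction sketch,
assuming a plain JDK 11 launch without --add-exports; the demo class is
illustrative and only the JDK class/method names come from the trace above:

{code}
import java.lang.reflect.Method;

// Illustrative demo class, not Flink code. Run on JDK 11 without
// --add-exports to reproduce the failure logged above.
public class SharedSecretsAccessDemo {
    public static void main(String[] args) throws Exception {
        // Loading the class succeeds; JPMS only rejects member access.
        Class<?> sharedSecrets = Class.forName("jdk.internal.misc.SharedSecrets");
        Method getRefAccess = sharedSecrets.getMethod("getJavaLangRefAccess");
        try {
            getRefAccess.invoke(null); // static method, so no receiver
        } catch (IllegalAccessException e) {
            // Prints the same "does not export jdk.internal.misc to unnamed
            // module" message that JavaGcCleanerWrapper logs above.
            System.err.println(e.getMessage());
        }
    }
}
{code}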

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Summary: Nightly Flink CLI end-to-end test failed with 
"JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
jdk.internal.misc.SharedSecrets" in JDK 11   (was: Flink CLI end-to-end test 
failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
jdk.internal.misc.SharedSecrets" in JDK 11 )

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in JDK 11 
> -
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>Reporter: Dian Fu
>Priority: Major
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[f
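
The frames between JavaGcCleanerWrapper and TaskExecutor also show why the
cleaner is invoked at all: on slot shutdown, UnsafeMemoryBudget.verifyEmpty
tries to reserve the entire budget, and reserveMemory falls back to running
pending GC cleaners when that reservation does not succeed at first. A hedged
sketch of that verification idea (all names are illustrative, not Flink's
actual implementation):

{code}
// Illustrative sketch of the budget-verification pattern visible in the
// trace (verifyEmpty -> reserveMemory): reserving the full budget succeeds
// only if every earlier reservation was released. Not Flink's actual code.
public final class MemoryBudgetSketch {
    private final long totalBytes;
    private long availableBytes;

    public MemoryBudgetSketch(long totalBytes) {
        this.totalBytes = totalBytes;
        this.availableBytes = totalBytes;
    }

    public synchronized boolean reserve(long bytes) {
        if (availableBytes < bytes) {
            // Flink's version nudges the GC here (tryRunPendingCleaners) so
            // that segments pending cleaning are released before giving up --
            // the call that fails on JDK 11 in the log above.
            System.gc();
        }
        if (availableBytes < bytes) {
            return false;
        }
        availableBytes -= bytes;
        return true;
    }

    public synchronized void release(long bytes) {
        availableBytes = Math.min(totalBytes, availableBytes + bytes);
    }

    // Reserve the whole budget to prove it is empty, then release it again
    // so the check itself has no lasting effect.
    public synchronized boolean verifyEmpty() {
        if (!reserve(totalBytes)) {
            return false;
        }
        release(totalBytes);
        return true;
    }
}
{code}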

[jira] [Updated] (FLINK-17822) Nightly Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17822:

Labels: test-stability  (was: )

> Nightly Flink CLI end-to-end test failed with 
> "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
> jdk.internal.misc.SharedSecrets" in JDK 11 
> -
>
> Key: FLINK-17822
> URL: https://issues.apache.org/jira/browse/FLINK-17822
> Project: Flink
>  Issue Type: Bug
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance: 
> https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600
> {code}
> 2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
> org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
> UNEXPECTED - Failed to invoke waitForReferenceProcessing
> 2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot 
> access class jdk.internal.misc.SharedSecrets (in module java.base) because 
> module java.base does not export jdk.internal.misc to unnamed module @54e3658c
> 2020-05-19T21:59:39.8830707Z  at 
> jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361)
>  ~[?:?]
> 2020-05-19T21:59:39.8831166Z  at 
> java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) 
> ~[?:?]
> 2020-05-19T21:59:39.8831744Z  at 
> java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
> 2020-05-19T21:59:39.8832596Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8833667Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8834712Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8835686Z  at 
> org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8836652Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8838033Z  at 
> org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8839259Z  at 
> org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8840148Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841035Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8841603Z  at 
> java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
> ~[?:?]
> 2020-05-19T21:59:39.8842069Z  at 
> java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799)
>  ~[?:?]
> 2020-05-19T21:59:39.8842844Z  at 
> java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
> ~[?:?]
> 2020-05-19T21:59:39.8843828Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8844790Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8845754Z  at 
> org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8846842Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8847711Z  at 
> org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
>  ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> 2020-05-19T21:59:39.8848295Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
> 2020-05-19T21:59:39.8848732Z  at 
> jdk.internal.reflect.NativeMethodAccessorImpl.invok

[jira] [Created] (FLINK-17822) Flink CLI end-to-end test failed with "JavaGcCleanerWrapper$PendingCleanersRunner cannot access class jdk.internal.misc.SharedSecrets" in JDK 11

2020-05-19 Thread Dian Fu (Jira)
Dian Fu created FLINK-17822:
---

 Summary: Flink CLI end-to-end test failed with 
"JavaGcCleanerWrapper$PendingCleanersRunner cannot access class 
jdk.internal.misc.SharedSecrets" in JDK 11 
 Key: FLINK-17822
 URL: https://issues.apache.org/jira/browse/FLINK-17822
 Project: Flink
  Issue Type: Bug
Reporter: Dian Fu


Instance: 
https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1887/logs/600

{code}
2020-05-19T21:59:39.8829043Z 2020-05-19 21:59:25,193 ERROR 
org.apache.flink.util.JavaGcCleanerWrapper   [] - FATAL 
UNEXPECTED - Failed to invoke waitForReferenceProcessing
2020-05-19T21:59:39.8829849Z java.lang.IllegalAccessException: class 
org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner cannot access 
class jdk.internal.misc.SharedSecrets (in module java.base) because module 
java.base does not export jdk.internal.misc to unnamed module @54e3658c
2020-05-19T21:59:39.8830707Z  at 
jdk.internal.reflect.Reflection.newIllegalAccessException(Reflection.java:361) 
~[?:?]
2020-05-19T21:59:39.8831166Z  at 
java.lang.reflect.AccessibleObject.checkAccess(AccessibleObject.java:591) ~[?:?]
2020-05-19T21:59:39.8831744Z  at 
java.lang.reflect.Method.invoke(Method.java:558) ~[?:?]
2020-05-19T21:59:39.8832596Z  at 
org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.getJavaLangRefAccess(JavaGcCleanerWrapper.java:362)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8833667Z  at 
org.apache.flink.util.JavaGcCleanerWrapper$PendingCleanersRunner.tryRunPendingCleaners(JavaGcCleanerWrapper.java:351)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8834712Z  at 
org.apache.flink.util.JavaGcCleanerWrapper$CleanerManager.tryRunPendingCleaners(JavaGcCleanerWrapper.java:207)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8835686Z  at 
org.apache.flink.util.JavaGcCleanerWrapper.tryRunPendingCleaners(JavaGcCleanerWrapper.java:158)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8836652Z  at 
org.apache.flink.runtime.memory.UnsafeMemoryBudget.reserveMemory(UnsafeMemoryBudget.java:94)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8838033Z  at 
org.apache.flink.runtime.memory.UnsafeMemoryBudget.verifyEmpty(UnsafeMemoryBudget.java:64)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8839259Z  at 
org.apache.flink.runtime.memory.MemoryManager.verifyEmpty(MemoryManager.java:172)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8840148Z  at 
org.apache.flink.runtime.taskexecutor.slot.TaskSlot.verifyMemoryFreed(TaskSlot.java:311)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8841035Z  at 
org.apache.flink.runtime.taskexecutor.slot.TaskSlot.lambda$closeAsync$1(TaskSlot.java:301)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8841603Z  at 
java.util.concurrent.CompletableFuture.uniRunNow(CompletableFuture.java:815) 
~[?:?]
2020-05-19T21:59:39.8842069Z  at 
java.util.concurrent.CompletableFuture.uniRunStage(CompletableFuture.java:799) 
~[?:?]
2020-05-19T21:59:39.8842844Z  at 
java.util.concurrent.CompletableFuture.thenRun(CompletableFuture.java:2121) 
~[?:?]
2020-05-19T21:59:39.8843828Z  at 
org.apache.flink.runtime.taskexecutor.slot.TaskSlot.closeAsync(TaskSlot.java:300)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8844790Z  at 
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlotInternal(TaskSlotTableImpl.java:404)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8845754Z  at 
org.apache.flink.runtime.taskexecutor.slot.TaskSlotTableImpl.freeSlot(TaskSlotTableImpl.java:365)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8846842Z  at 
org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlotInternal(TaskExecutor.java:1589)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8847711Z  at 
org.apache.flink.runtime.taskexecutor.TaskExecutor.freeSlot(TaskExecutor.java:967)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8848295Z  at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
2020-05-19T21:59:39.8848732Z  at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 ~[?:?]
2020-05-19T21:59:39.8849228Z  at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
2020-05-19T21:59:39.8849669Z  at 
java.lang.reflect.Method.invoke(Method.java:566) ~[?:?]
2020-05-19T21:59:39.8850656Z  at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)
 ~[flink-dist_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
2020-05-19T21:59:39.8851589Z  at
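
A common mitigation, not necessarily the fix the Flink community settled on
here, is either to re-export the internal package at launch
(--add-exports java.base/jdk.internal.misc=ALL-UNNAMED) or to probe the API
once and degrade gracefully when the module system denies access. A hedged
sketch of the second option; all names except the JDK ones are illustrative:

{code}
import java.lang.reflect.Method;

// Illustrative probe, not Flink's actual JavaGcCleanerWrapper implementation.
public final class ReferenceProcessingProbe {

    // Reflectively calls SharedSecrets.getJavaLangRefAccess()
    // .waitForReferenceProcessing(); returns false after a plain System.gc()
    // when reflection fails, instead of surfacing the IllegalAccessException.
    public static boolean tryWaitForReferenceProcessing() {
        try {
            Class<?> sharedSecrets = Class.forName("jdk.internal.misc.SharedSecrets");
            Object refAccess = sharedSecrets.getMethod("getJavaLangRefAccess").invoke(null);
            Method waitMethod = Class.forName("jdk.internal.misc.JavaLangRefAccess")
                    .getMethod("waitForReferenceProcessing");
            return (Boolean) waitMethod.invoke(refAccess);
        } catch (ReflectiveOperationException e) {
            System.gc(); // best-effort fallback when jdk.internal.misc is sealed
            return false;
        }
    }
}
{code}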
