[GitHub] [flink] flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule 
StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership 
is acquired
URL: https://github.com/apache/flink/pull/9242#issuecomment-515479114
 
 
   ## CI report:
   
   * bc4bc3a00e1692381368af36b87d233677e8c4ac : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120867383)
   * dd5aa085e80d3bc785de7e4e43690687f0f27974 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120968774)
   * d2f527bfe79d8e25cbaf88aa13330d968364b633 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121115520)
   * 7a1d13ea0fc924b1f38681153b73ed27e754b9f4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121169984)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2019-07-29 Thread ChaojianZhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895691#comment-16895691
 ] 

ChaojianZhang commented on FLINK-12942:
---

Hi [~jark], I'd like to take on this translation work. Could you assign it to me?

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/elasticsearch.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Comment Edited] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2019-07-29 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895749#comment-16895749
 ] 

Jark Wu edited comment on FLINK-12942 at 7/30/19 3:58 AM:
--

Hi [~Chaojian], [~x1q1j1] has already provided a pull request for this and we 
are reviewing it. 
Thanks for joining the community. You can look for some other unassigned 
translation tickets. 


was (Author: jark):
Hi [~Chaojian], [~x1q1j1] has already provide a pull request for this and we 
are working on reviewing. 
Thanks for joining the community, you can looking for some other un-assigned 
translate tickets. 

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/elasticsearch.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-12942) Translate "Elasticsearch Connector" page into Chinese

2019-07-29 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895749#comment-16895749
 ] 

Jark Wu commented on FLINK-12942:
-

Hi [~Chaojian], [~x1q1j1] has already provided a pull request for this and we 
are reviewing it. 
Thanks for joining the community. You can look for some other unassigned 
translation tickets. 

> Translate "Elasticsearch Connector" page into Chinese
> -
>
> Key: FLINK-12942
> URL: https://issues.apache.org/jira/browse/FLINK-12942
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Forward Xu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Translate the internal page 
> "https://ci.apache.org/projects/flink/flink-docs-master/dev/connectors/elasticsearch.html"
>  into Chinese.
>  
> The doc is located in "flink/docs/dev/connectors/elasticsearch.zh.md"



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot edited a comment on issue #9267: Merge pull request #3 from apache/master

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9267: Merge pull request #3 from 
apache/master
URL: https://github.com/apache/flink/pull/9267#issuecomment-516246008
 
 
   ## CI report:
   
   * 4c00f76596da5e258c2b642525505cfd8327af9d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121185985)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308532396
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This Docker Compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, the supervision of Jobs, and 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ Name                                              Command                          State        Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   * 5009b10d38eef92f25bfe4ff4608f2dd121ea9c6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120915709)
   * e3b272586d8f41d3800e86134730c4dc427952a6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120916220)
   * f1aee543a7aef88e3cf052f4d686ab0a8e5938e5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120996260)
   * c66060dba290844085f90f554d447c6d7033779d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121131224)
   * 700e5c19a3d49197ef2b18a646f0b6e1bf783ba8 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121174288)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-515414321
 
 
   ## CI report:
   
   * 2c59c8b33bcbc200978ed7b5ad27311ada599aab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120842331)
   * 61008edcd78722ccd9ed143a2bd005ca91ee39b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121050765)
   * 221bf689e1968f83bc99d862b3522a9ad7d06829 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121178448)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13475) Reduce dependency on third-party maven repositories

2019-07-29 Thread Terry Wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895724#comment-16895724
 ] 

Terry Wang commented on FLINK-13475:


I looked into this problem and found that javax.jms:jms:jar:1.1 can be found at 
 [https://mvnrepository.com/artifact/javax.jms/jms] .

javax.jms:jms:jar:1.1 is imported by HiveRunner with test scope, while the other 
artifact, org.pentaho:pentaho-aggdesigner-algorithm, is not imported. I uploaded 
a dependency tree file in the attachments.

[~MartijnVisser] can you provide an environment that reproduces the compile problem? 

> Reduce dependency on third-party maven repositories
> ---
>
> Key: FLINK-13475
> URL: https://issues.apache.org/jira/browse/FLINK-13475
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
> Attachments: flink-connector-hive-dependency.txt
>
>
> A user reported that Flink's Hive connectors requires third-party maven 
> repositories which are not everywhere accessible in order to build. 
> Concretely, the hive connector requires access to Conjars for 
> org.pentaho:pentaho-aggdesigner-algorithm and javax.jms:jms:jar:1.1.
> It would be great to reduce the dependency on third-party maven repositories 
> if possible. For future reference, other projects faced similar problems: 
> CALCITE-605, CALCITE-1474



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] hongtao12310 opened a new pull request #9267: Merge pull request #3 from apache/master

2019-07-29 Thread GitBox
hongtao12310 opened a new pull request #9267: Merge pull request #3 from 
apache/master
URL: https://github.com/apache/flink/pull/9267
 
 
   merge master branch
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluster with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (yes / no / don't know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13481) allow user launch job on yarn from SQL Client command line

2019-07-29 Thread Hongtao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895729#comment-16895729
 ] 

Hongtao Zhang commented on FLINK-13481:
---

I have identified this issue and am working on it.

> allow user launch job on yarn from SQL Client command line
> --
>
> Key: FLINK-13481
> URL: https://issues.apache.org/jira/browse/FLINK-13481
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: Flink 1.10
> CDH 5.13.3
>  
>  
>Reporter: Hongtao Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> The Flink SQL Client's active command line doesn't load the FlinkYarnSessionCli 
> general options.
> The general options contain "addressOption", with which the user can specify 
> --jobmanager="yarn-cluster" or -m to run the SQL on a YARN cluster.
>  



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Created] (FLINK-13482) How can I cleanly shutdown streaming jobs in local mode?

2019-07-29 Thread Donghui Xu (JIRA)
Donghui Xu created FLINK-13482:
--

 Summary: How can I cleanly shutdown streaming jobs in local mode?
 Key: FLINK-13482
 URL: https://issues.apache.org/jira/browse/FLINK-13482
 Project: Flink
  Issue Type: Improvement
Reporter: Donghui Xu


Currently, streaming jobs can be stopped using the "cancel" and "stop" commands only 
in cluster mode, not in local mode.
When users need to explicitly terminate jobs, it is necessary to provide a 
termination mechanism for local-mode streaming jobs.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
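The termination mechanism the issue asks for can be illustrated with a
cooperative-shutdown sketch. This is not Flink's API, only a minimal stand-in
for the pattern being requested: a worker loop that checks an explicit stop
signal and flushes its state on the way out, rather than being killed mid-record.

```python
import threading
import time

def run_local_job(stop_event, results):
    # Stand-in for a local-mode streaming job: process records
    # until an explicit stop signal arrives, then flush state.
    processed = 0
    while not stop_event.is_set():
        processed += 1  # "process" one record
        time.sleep(0.001)
    results.append(processed)  # flush on clean shutdown

stop = threading.Event()
results = []
worker = threading.Thread(target=run_local_job, args=(stop, results))
worker.start()
time.sleep(0.05)
stop.set()       # the explicit termination request
worker.join()    # returns only after the job has flushed its state
```

With a hook like `stop.set()` exposed to the user, a local-mode job can
terminate cleanly instead of only dying with the process.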


[GitHub] [flink] wuchong commented on issue #9203: [FLINK-13375][table-api] Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-29 Thread GitBox
wuchong commented on issue #9203: [FLINK-13375][table-api] Improve config names 
in ExecutionConfigOptions and OptimizerConfigOptions
URL: https://github.com/apache/flink/pull/9203#issuecomment-516249435
 
 
   Thanks for the review, @godfreyhe. I will modify them when merging 
because they are tiny changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13485) Translate "Table API Example Walkthrough" page into Chinese

2019-07-29 Thread Jark Wu (JIRA)
Jark Wu created FLINK-13485:
---

 Summary: Translate "Table API Example Walkthrough" page into 
Chinese
 Key: FLINK-13485
 URL: https://issues.apache.org/jira/browse/FLINK-13485
 Project: Flink
  Issue Type: Sub-task
  Components: chinese-translation, Documentation
Reporter: Jark Wu


FLINK-12747 has added a page that walks through the Table API. We can translate it 
into Chinese now. 

The page is located in {{docs/getting-started/walkthroughs/table_api.zh.md}}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308526422
 
 

 ##
 File path: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventCount.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.ClickEventStatisticsCollector;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.CountingAggregator;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventDeserializationSchema;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatistics;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatisticsSerializationSchema;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A simple streaming job reading {@link ClickEvent}s from Kafka, counting 
events per minute and
+ * writing the resulting {@link ClickEventStatistics} back to Kafka.
+ *
+ *  It can be run with or without checkpointing and with event time or 
processing time semantics.
+ * 
+ *
+ *
 
 Review comment:
   Good point. Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
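The ClickEventCount job quoted above counts `ClickEvent`s per `page` in
one-minute windows. The core of that aggregation can be sketched outside Flink
as a plain tumbling-window count. The field names (`timestamp`, `page`) follow
the playground doc text; this is an illustration, not the playground's actual code.

```python
from collections import defaultdict

WINDOW_MS = 60_000  # one-minute tumbling windows, as in the playground

def count_clicks(events):
    # events: iterable of (timestamp_ms, page). Key each event by page
    # and by its window start, i.e. the timestamp rounded down to the minute.
    counts = defaultdict(int)
    for timestamp_ms, page in events:
        window_start = timestamp_ms - (timestamp_ms % WINDOW_MS)
        counts[(page, window_start)] += 1
    return dict(counts)

events = [(1_000, "/index"), (2_000, "/index"), (3_000, "/help"),
          (61_000, "/index")]
print(count_clicks(events))
# {('/index', 0): 2, ('/help', 0): 1, ('/index', 60000): 1}
```

In the playground the same keyed one-minute count runs continuously, with
Flink handling event time and out-of-order events via the
`BoundedOutOfOrdernessTimestampExtractor` imported in the quoted file.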


[jira] [Created] (FLINK-13484) ConnectedComponents end-to-end test instable with NoResourceAvailableException

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-13484:
---

 Summary: ConnectedComponents end-to-end test instable with 
NoResourceAvailableException
 Key: FLINK-13484
 URL: https://issues.apache.org/jira/browse/FLINK-13484
 Project: Flink
  Issue Type: Bug
  Components: Test Infrastructure
Reporter: Tzu-Li (Gordon) Tai


The {{ConnectedComponents iterations with high parallelism}} e2e test seems to 
fail sporadically with {{NoResourceAvailableException}}.

https://api.travis-ci.org/v3/job/564894454/log.txt

{code}
2019-07-29 18:10:37,294 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph- Map (Map at 
main(HighParallelismIterationsTestProgram.java:50)) (9/25) 
(84f306767dabaa104d215bb429797833) switched from SCHEDULED to FAILED.
org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: 
Could not allocate enough slots to run the job. Please make sure that the 
cluster has enough resources.
at 
org.apache.flink.runtime.executiongraph.Execution.lambda$scheduleForExecution$0(Execution.java:459)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.jobmaster.slotpool.SchedulerImpl.lambda$internalAllocateSlot$0(SchedulerImpl.java:190)
at 
java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
at 
java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$SingleTaskSlot.release(SlotSharingManager.java:694)
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.release(SlotSharingManager.java:482)
at 
org.apache.flink.runtime.jobmaster.slotpool.SlotSharingManager$MultiTaskSlot.lambda$new$0(SlotSharingManager.java:378)
at 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:822)
at 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:797)
at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
at 
org.apache.flink.runtime.concurrent.FutureUtils$Timeout.run(FutureUtils.java:943)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:123)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:122)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:172)
at akka.actor.Actor.aroundReceive(Actor.scala:517)
at akka.actor.Actor.aroundReceive$(Actor.scala:515)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at 
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at 
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
2019-07-29 18:10:37,299 INFO  
org.apache.flink.runtime.executiongraph.failover.AdaptedRestartPipelinedRegionStrategyNG
  - Fail to pass the restart strategy validation in region failover. Fallback 
to fail global.

[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308534463
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This Docker Compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, the supervision of Jobs, and 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                     Name                                   Command              State                 Ports
+-----------------------------------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_client_1                 /docker-entrypoint.sh flin ...   Exit 0
+flink-cluster-playground_jobmanager_1             /docker-entrypoint.sh jobm ...   Up       6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1                  start-kafka.sh                   Up       0.0.0.0:9094->9094/tcp
+flink-cluster-playground_taskmanager_1            /docker-entrypoint.sh task ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_zookeeper_1              /bin/sh -c /usr/sbin/sshd ...    Up       2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308534528
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This Docker Compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                     Name                                   Command              State                 Ports
+-----------------------------------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_client_1                 /docker-entrypoint.sh flin ...   Exit 0
+flink-cluster-playground_jobmanager_1             /docker-entrypoint.sh jobm ...   Up       6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1                  start-kafka.sh                   Up       0.0.0.0:9094->9094/tcp
+flink-cluster-playground_taskmanager_1            /docker-entrypoint.sh task ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_zookeeper_1              /bin/sh -c /usr/sbin/sshd ...    Up       2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308534887
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] dawidwys commented on a change in pull request #9261: [FLINK-13399][table] Create two separate table uber jars for old and blink planners

2019-07-29 Thread GitBox
dawidwys commented on a change in pull request #9261:  [FLINK-13399][table] 
Create two separate table uber jars for old and blink planners
URL: https://github.com/apache/flink/pull/9261#discussion_r308544193
 
 

 ##
 File path: docs/dev/table/index.md
 ##
 @@ -44,33 +44,48 @@ The following dependencies are relevant for most projects:
 * `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
 * `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
 * `flink-table-planner`: The table program planner and runtime.
-* `flink-table-uber`: Packages the modules above into a distribution for most 
Table & SQL API use cases. The uber JAR file `flink-table*.jar` is located in 
the `/opt` directory of a Flink release and can be moved to `/lib` if desired.
+* `flink-table-planner-blink`: The table program blink planner.
+* `flink-table-runtime-blink`: The table program blink runtime.
+* `flink-table-uber`: Packages the common modules above plus the current 
planner into a distribution for most Table & SQL API use cases. The uber JAR 
file `flink-table*.jar` is by default located in the `/lib` directory of a 
Flink release.
 
 Review comment:
  I renamed it due to the fact that when copying the uber jar we are stripping 
the `uber` suffix. This is the current behavior, which I adopted for the other 
uber jar as well.
   
   So I renamed it to `flink-table-VERSION.jar`


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership is acquired

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9242: [FLINK-13408][runtime] Schedule 
StandaloneResourceManager.setFailUnfulfillableRequest whenever the leadership 
is acquired
URL: https://github.com/apache/flink/pull/9242#issuecomment-515479114
 
 
   ## CI report:
   
   * bc4bc3a00e1692381368af36b87d233677e8c4ac : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120867383)
   * dd5aa085e80d3bc785de7e4e43690687f0f27974 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120968774)
   * d2f527bfe79d8e25cbaf88aa13330d968364b633 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121115520)
   * 7a1d13ea0fc924b1f38681153b73ed27e754b9f4 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121169984)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   * 5009b10d38eef92f25bfe4ff4608f2dd121ea9c6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120915709)
   * e3b272586d8f41d3800e86134730c4dc427952a6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120916220)
   * f1aee543a7aef88e3cf052f4d686ab0a8e5938e5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120996260)
   * c66060dba290844085f90f554d447c6d7033779d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121131224)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13480) Export SlotManager status at debug mode.

2019-07-29 Thread Guowei Ma (JIRA)
Guowei Ma created FLINK-13480:
-

 Summary: Export SlotManager status at debug mode.
 Key: FLINK-13480
 URL: https://issues.apache.org/jira/browse/FLINK-13480
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Coordination
Affects Versions: 1.9.0, 1.10
Reporter: Guowei Ma


Some resource allocation issues, for example FLINK-10819, are very difficult to 
resolve.

One reason is that the status of the SlotManager is hard to observe. We could 
save a lot of time when troubleshooting such problems if the SlotManager status 
were exported to the log.

So I propose exporting the status of the SlotManager when debug logging is 
enabled. 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] wuchong commented on a change in pull request #9261: [FLINK-13399][table] Create two separate table uber jars for old and blink planners

2019-07-29 Thread GitBox
wuchong commented on a change in pull request #9261:  [FLINK-13399][table] 
Create two separate table uber jars for old and blink planners
URL: https://github.com/apache/flink/pull/9261#discussion_r308511511
 
 

 ##
 File path: docs/dev/table/index.md
 ##
 @@ -44,33 +44,48 @@ The following dependencies are relevant for most projects:
 * `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Java programming language.
 * `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet 
API support using the Scala programming language.
 * `flink-table-planner`: The table program planner and runtime.
-* `flink-table-uber`: Packages the modules above into a distribution for most 
Table & SQL API use cases. The uber JAR file `flink-table*.jar` is located in 
the `/opt` directory of a Flink release and can be moved to `/lib` if desired.
+* `flink-table-planner-blink`: The table program blink planner.
+* `flink-table-runtime-blink`: The table program blink runtime.
+* `flink-table-uber`: Packages the common modules above plus the current 
planner into a distribution for most Table & SQL API use cases. The uber JAR 
file `flink-table*.jar` is by default located in the `/lib` directory of a 
Flink release.
+* `flink-table-uber-blink`: Packages the common modules above plus the blink 
specific modules into a distribution for most Table & SQL API use cases. The 
uber JAR file `flink-table-blink*.jar` is by default located in the `/lib` 
directory of a Flink release.
 
 ### Table Program Dependencies
 
-The following dependencies must be added to a project in order to use the 
Table API & SQL for defining pipelines:
+Depending on the target programming language, you need to add the Java or 
Scala API to a project in order to use the Table API & SQL for defining 
pipelines:
 
 {% highlight xml %}
+
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+</dependency>
+
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
 </dependency>
 {% endhighlight %}
 
-Additionally, depending on the target programming language, you need to add 
the Java or Scala API.
-
+Additionally, if you want to run the Table API & SQL programs locally you must 
add one of the
+following sets of modules, depending on which planner you want to use:
 {% highlight xml %}
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
 </dependency>
 
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-runtime-blink{{ site.scala_version_suffix }}</artifactId>
 
 Review comment:
   `flink-table-planner-blink` will depend on `flink-table-runtime-blink`, I 
think we can omit `flink-table-runtime-blink` dependency here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (FLINK-13476) Release partitions for FINISHED or FAILED tasks if they are cancelled

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895774#comment-16895774
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-13476:
-

Since this is a blocker for 1.9.0, we should remove the 1.10.0 tag as fix 
version. I will remove it.

> Release partitions for FINISHED or FAILED tasks if they are cancelled
> -
>
> Key: FLINK-13476
> URL: https://issues.apache.org/jira/browse/FLINK-13476
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
>
> With FLINK-12615 we removed the explicit release of partitions from the JM 
> when an {{Execution}} in state {{FINISHED}} or {{FAILED}} is cancelled. In 
> order to avoid a resource leak when using pipelined result partitions whose 
> consumers fail before they start consuming, we should re-introduce the 
> deleted else branch (removed via 408f6b67aefaccfc76708b2d4772eb7f0a8fd984).
> Once we properly ensure that a {{Task}} does not finish until its produced 
> results have been either persisted or sent to a consumer, we should be able 
> to remove this branch again.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530880
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[jira] [Updated] (FLINK-13476) Release partitions for FINISHED or FAILED tasks if they are cancelled

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13476:

Fix Version/s: (was: 1.10.0)

> Release partitions for FINISHED or FAILED tasks if they are cancelled
> -
>
> Key: FLINK-13476
> URL: https://issues.apache.org/jira/browse/FLINK-13476
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.9.0
>
>
> With FLINK-12615 we removed the explicit release of partitions from the JM 
> when an {{Execution}} in state {{FINISHED}} or {{FAILED}} is cancelled. In 
> order to avoid a resource leak when using pipelined result partitions whose 
> consumers fail before they start consuming, we should re-introduce the 
> deleted else branch (removed via 408f6b67aefaccfc76708b2d4772eb7f0a8fd984).
> Once we properly ensure that a {{Task}} does not finish until its produced 
> results have been either persisted or sent to a consumer, we should be able 
> to remove this branch again.



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530911
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the ocumentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
 
 Review comment:
   That's more engaging. Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9192: [FLINK-12749] [docs] [examples] 
Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#issuecomment-513664474
 
 
   ## CI report:
   
   * 74a251ff6fa8c2ff9b13ae5869aacf90146024aa : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119964791)
   * 671dd6c48049ec526030cfc2b62b853c81ed01ab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119965292)
   * 0505f7e4164015d4c604963787e6111fa55d5d9f : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121190039)
   




[jira] [Commented] (FLINK-13476) Release partitions for FINISHED or FAILED tasks if they are cancelled

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895775#comment-16895775
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-13476:
-

Is this the cause of https://issues.apache.org/jira/browse/FLINK-13487?

> Release partitions for FINISHED or FAILED tasks if they are cancelled
> -
>
> Key: FLINK-13476
> URL: https://issues.apache.org/jira/browse/FLINK-13476
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Blocker
> Fix For: 1.9.0
>
>
> With FLINK-12615 we removed that partitions are being explicitly released 
> from the JM if an {{Execution}} which is in state {{FINISHED}} or {{FAILED}} 
> is being cancelled. In order to not have resource leak when using pipelined 
> result partitions whose consumers fail before start consuming, we should 
> re-introduce the deleted else branch (removed via 
> 408f6b67aefaccfc76708b2d4772eb7f0a8fd984).
> Once we properly wait that a {{Task}} does not finish until its produced 
> results have been either persisted or sent to a consumer, then we should be 
> able to remove this branch again.
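The description above can be modeled in a few lines. This is a simplified, self-contained sketch, not Flink's actual `Execution` class: the state names mirror Flink's `ExecutionState`, but the class and method names are illustrative only.

```java
import java.util.HashSet;
import java.util.Set;

public class ExecutionSketch {
    enum State { RUNNING, FINISHED, FAILED, CANCELED }

    private State state = State.RUNNING;
    private final Set<String> producedPartitions = new HashSet<>();
    private final Set<String> released = new HashSet<>();

    void addPartition(String id) { producedPartitions.add(id); }
    void finish() { state = State.FINISHED; }
    void fail() { state = State.FAILED; }

    // Cancelling a FINISHED or FAILED execution must still release its
    // produced partitions (the re-introduced else branch); otherwise
    // pipelined partitions leak when the consumer fails before consuming.
    void cancel() {
        if (state == State.FINISHED || state == State.FAILED) {
            released.addAll(producedPartitions);
            producedPartitions.clear();
        }
        state = State.CANCELED;
    }

    Set<String> releasedPartitions() { return released; }

    public static void main(String[] args) {
        ExecutionSketch e = new ExecutionSketch();
        e.addPartition("partition-1");
        e.finish();
        e.cancel(); // consumer failed before consuming
        System.out.println(e.releasedPartitions()); // [partition-1]
    }
}
```

Without the branch inside `cancel()`, the partition set would simply be dropped on cancellation and never released, which is the leak the issue describes.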



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308535192
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and resource 
management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ Name                                              Command                          State    Ports
+--------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_client_1                 /docker-entrypoint.sh flin ...   Exit 0
+flink-cluster-playground_jobmanager_1             /docker-entrypoint.sh jobm ...   Up       6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1                  start-kafka.sh                   Up       0.0.0.0:9094->9094/tcp
+flink-cluster-playground_taskmanager_1            /docker-entrypoint.sh task ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_zookeeper_1              /bin/sh -c /usr/sbin/sshd ...    Up       2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] flinkbot commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase

2019-07-29 Thread GitBox
flinkbot commented on issue #9269: [FLINK-9900][tests] Fix unstable 
ZooKeeperHighAvailabilityITCase
URL: https://github.com/apache/flink/pull/9269#issuecomment-516264551
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

 ## Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-13490) Fix return null in JDBCUtils::getFieldFromResultSet

2019-07-29 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-13490:

Priority: Critical  (was: Major)

> Fix return null in JDBCUtils::getFieldFromResultSet
> ---
>
> Key: FLINK-13490
> URL: https://issues.apache.org/jira/browse/FLINK-13490
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / JDBC
>Reporter: Caizhi Weng
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> The null checking in `JDBCUtils::getFieldFromResultSet` is incorrect. Under 
> the current implementation, if one column is null in the result set, the 
> following calls to this method using the same result set will always return 
> null, no matter what the content of the column is.





[jira] [Created] (FLINK-13490) Fix return null in JDBCUtils::getFieldFromResultSet

2019-07-29 Thread Caizhi Weng (JIRA)
Caizhi Weng created FLINK-13490:
---

 Summary: Fix return null in JDBCUtils::getFieldFromResultSet
 Key: FLINK-13490
 URL: https://issues.apache.org/jira/browse/FLINK-13490
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / JDBC
Reporter: Caizhi Weng
 Fix For: 1.9.0, 1.10.0


The null checking in `JDBCUtils::getFieldFromResultSet` is incorrect. Under the 
current implementation, if one column is null in the result set, the following 
calls to this method using the same result set will always return null, no 
matter what the content of the column is.
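The bug description points at the JDBC `ResultSet.wasNull()` contract: `wasNull()` reports on the most recently read column, so it must be consulted immediately after each `get*` call and never cached across calls. The sketch below illustrates the correct pattern; `FakeResultSet` is a hypothetical stand-in (the real code uses `java.sql.ResultSet`, and the actual fix lives in `JDBCUtils`).

```java
import java.util.Arrays;
import java.util.List;

public class WasNullSketch {
    // Minimal stand-in for java.sql.ResultSet, just enough to show the contract.
    static class FakeResultSet {
        private final List<Object> row;
        private boolean lastWasNull;
        FakeResultSet(Object... row) { this.row = Arrays.asList(row); }
        Object getObject(int index) {            // 1-based index, like JDBC
            Object v = row.get(index - 1);
            lastWasNull = (v == null);           // state refers only to this read
            return v;
        }
        boolean wasNull() { return lastWasNull; }
    }

    // Correct pattern: read the column first, then check wasNull() for
    // exactly that read -- never reuse the flag from an earlier column.
    static Object getField(FakeResultSet rs, int index) {
        Object v = rs.getObject(index);
        return rs.wasNull() ? null : v;
    }

    public static void main(String[] args) {
        FakeResultSet rs = new FakeResultSet(null, "page-1");
        System.out.println(getField(rs, 1)); // null
        System.out.println(getField(rs, 2)); // page-1, not poisoned by the earlier null
    }
}
```

A version that latches the null flag (returning null once any column was null) reproduces exactly the symptom in the description.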





[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308543164
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
 
 Review comment:
   Good point. Done.




[GitHub] [flink] flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - Table API Example Walkthrough

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #8903: [FLINK-12747][docs] Getting Started - 
Table API Example Walkthrough
URL: https://github.com/apache/flink/pull/8903#issuecomment-510464651
 
 
   ## CI report:
   
   * b2821a6ae97fd943f3a66b672e85fbd2374126c4 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/118909729)
   * 0699f7e5f2240a4a1bc44c15f08e6a1df47d3b01 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119054579)
   * 509e634257496dd2d8d42d512901f5eb46a82c50 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119406891)
   * 7579df06b6a0bf799e8a9c2bcb09984bf52c8e8c : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/119441302)
   * ccb9dc29d4755d0a6c4596e08743b38615eb276a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120480063)
   * 1b976f30a689d9bdbf65513f034b2954bfb91468 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120494302)
   * 3ccee75dd0d506b90a2019cde9045eee26a4f4d5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120749125)
   * e07648c718b4ea32a3f02f826ca6a337400572be : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769160)
   * cf082ff54f7bd160b9e0eb316459f419defdd0b7 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120872425)
   * b9b37f124c7c92c4d4b0c4e2101be33d9b86babd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120877385)
   * b09bc12655a6bfac8d3deb83dac24bf20b954423 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120879446)
   * dd3a34c416e972d9a63013c40a1452b82bd8423a : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121109594)
   * 3ec90b2709ae78d6a83fea8a491eb00157832764 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121124033)
   




[jira] [Commented] (FLINK-12847) Update Kinesis Connectors to latest Apache licensed libraries

2019-07-29 Thread Nagesh Honnalli (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895640#comment-16895640
 ] 

Nagesh Honnalli commented on FLINK-12847:
-

Would the connector changes be pushed such that they can be used by older 
versions of Flink - say 1.6?

> Update Kinesis Connectors to latest Apache licensed libraries
> -
>
> Key: FLINK-12847
> URL: https://issues.apache.org/jira/browse/FLINK-12847
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Kinesis
>Reporter: Dyana Rose
>Assignee: Dyana Rose
>Priority: Major
>
> Currently the referenced Kinesis Client Library and Kinesis Producer Library 
> code in the flink-connector-kinesis is licensed under the Amazon Software 
> License which is not compatible with the Apache License. This then requires a 
> fair amount of work in the CI pipeline and for users who want to use the 
> flink-connector-kinesis.
> The Kinesis Client Library v2.x and the AWS Java SDK v2.x both are now on the 
> Apache 2.0 license.
> [https://github.com/awslabs/amazon-kinesis-client/blob/master/LICENSE.txt]
> [https://github.com/aws/aws-sdk-java-v2/blob/master/LICENSE.txt]
> There is a PR for the Kinesis Producer Library to update it to the Apache 2.0 
> license ([https://github.com/awslabs/amazon-kinesis-producer/pull/256])
> The task should include, but not limited to, upgrading KCL/KPL to new 
> versions of Apache 2.0 license, changing licenses and NOTICE files in 
> flink-connector-kinesis, and adding flink-connector-kinesis to build, CI and 
> artifact publishing pipeline, updating the build profiles, updating 
> documentation that references the license incompatibility
> The expected outcome of this issue is that the flink-connector-kinesis will 
> be included with the standard build artifacts and will no longer need to be 
> built separately by users.





[jira] [Commented] (FLINK-13426) TaskExecutor uses the wrong Registrationid in the heartbeat with RM.

2019-07-29 Thread Guowei Ma (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895684#comment-16895684
 ] 

Guowei Ma commented on FLINK-13426:
---

TaskExecutor should use the taskExecutorRegistrationId obtained after the first 
successful sendSlotReport; otherwise it might use the old one in the heartbeat 
with the ResourceManager.

Therefore, TaskExecutor should monitor the ResourceManager in the 
slotReportResponseFuture completion callback.

 

> TaskExecutor uses the wrong Registrationid in the heartbeat with RM.
> 
>
> Key: FLINK-13426
> URL: https://issues.apache.org/jira/browse/FLINK-13426
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.1, 1.9.0
>Reporter: Guowei Ma
>Priority: Minor
> Attachments: image-2019-07-25-17-57-03-537.png
>
>
> 1. The first time, TaskExecutor registers with the RM successfully. If it fails to send 
> the SlotReport to the SlotManager, TaskExecutor will reconnect to the RM again. However, 
> TaskExecutor still uses the old registration id in the 
> EstablishedResourceManagerConnection.
> 2. The second time, TaskExecutor registers with the RM successfully and gets a new 
> registration id.
> 3. The first and second rounds have a race condition, so the task 
> executor may use the old registration id in the heartbeat with the RM.
>  
> !image-2019-07-25-17-57-03-537.png!
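One possible fix direction for the stale-id race can be sketched as keeping a single mutable reference to the current registration id and replacing it on every successful (re)registration, so heartbeats never carry a stale id from the first round. This is a hypothetical illustration, not Flink's actual TaskExecutor code; all names below are invented.

```java
import java.util.concurrent.atomic.AtomicReference;

public class RegistrationSketch {
    // Single source of truth for the current registration id.
    private final AtomicReference<String> registrationId = new AtomicReference<>();

    // Called whenever registration with the ResourceManager succeeds,
    // including re-registration after a failed slot report.
    void onRegistrationSuccess(String newId) {
        registrationId.set(newId);
    }

    // The heartbeat must always report the most recent registration id.
    String heartbeatRegistrationId() {
        return registrationId.get();
    }

    public static void main(String[] args) {
        RegistrationSketch te = new RegistrationSketch();
        te.onRegistrationSuccess("reg-1"); // first registration
        te.onRegistrationSuccess("reg-2"); // re-registration after failed slot report
        System.out.println(te.heartbeatRegistrationId()); // reg-2
    }
}
```

The bug as described corresponds to the heartbeat path reading a copy of the id captured at the first registration rather than the shared, updated reference.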





[GitHub] [flink] lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats

2019-07-29 Thread GitBox
lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] 
Add tests for different Hive table formats
URL: https://github.com/apache/flink/pull/9264#discussion_r308514138
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/batch/connectors/hive/HiveTableOutputFormat.java
 ##
 @@ -256,10 +258,12 @@ public void configure(Configuration parameters) {
public void open(int taskNumber, int numTasks) throws IOException {
try {
StorageDescriptor sd = 
hiveTablePartition.getStorageDescriptor();
-   serializer = (AbstractSerDe) 
Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance();
+   serializer = (Serializer) 
Class.forName(sd.getSerdeInfo().getSerializationLib()).newInstance();
+   Preconditions.checkArgument(serializer instanceof 
Deserializer,
 
 Review comment:
   The problem is `SerDeUtils.initializeSerDe` requires a `Deserializer`. So we 
have to do the cast if we want to reuse this util method. Since most, if not 
all, SerDe libs implement both Serializer and Deserializer, I suppose this cast 
is OK?




[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   




[GitHub] [flink] lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] Add tests for different Hive table formats

2019-07-29 Thread GitBox
lirui-apache commented on a change in pull request #9264: [FLINK-13192][hive] 
Add tests for different Hive table formats
URL: https://github.com/apache/flink/pull/9264#discussion_r308516376
 
 

 ##
 File path: 
flink-connectors/flink-connector-hive/src/test/java/org/apache/flink/batch/connectors/hive/TableEnvHiveConnectorTest.java
 ##
 @@ -85,4 +87,40 @@ public void testDefaultPartitionName() throws Exception {
 
hiveShell.execute("drop database db1 cascade");
}
+
+   @Test
+   public void testDifferentFormats() throws Exception {
+   String[] formats = new String[]{"orc", "parquet", 
"sequencefile"};
 
 Review comment:
   I'll add test for csv. Since all other test cases are using text tables, I 
don't think we need to cover it here.
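Extending the format list from the quoted diff with "csv" can be sketched as below. The round-trip helper here is a placeholder for the real per-format create/insert/select check in `TableEnvHiveConnectorTest`; only the looping pattern is meant.

```java
import java.util.ArrayList;
import java.util.List;

public class FormatCoverageSketch {
    // Placeholder: the real test would create a Hive table stored as the
    // given format, write rows into it, and read them back for comparison.
    static String runRoundTrip(String format) {
        return "ok:" + format;
    }

    // Same shape as testDifferentFormats in the diff, with "csv" added.
    static List<String> testDifferentFormats() {
        String[] formats = {"orc", "parquet", "sequencefile", "csv"};
        List<String> results = new ArrayList<>();
        for (String format : formats) {
            results.add(runRoundTrip(format));
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(testDifferentFormats());
        // [ok:orc, ok:parquet, ok:sequencefile, ok:csv]
    }
}
```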




[jira] [Commented] (FLINK-13436) Add TPC-H queries as E2E tests

2019-07-29 Thread Jark Wu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895753#comment-16895753
 ] 

Jark Wu commented on FLINK-13436:
-

Great work [~carp84]! 

Just a double check. It seems that "io.airlift.tpch" doesn't have a LICENSE 
file or statement in the project. However, I found an Apache 2.0 license header 
in each source file and an Apache 2.0 license in [maven 
repository|https://mvnrepository.com/artifact/io.airlift.tpch/tpch]. Is that 
enough for us?

> Add TPC-H queries as E2E tests
> --
>
> Key: FLINK-13436
> URL: https://issues.apache.org/jira/browse/FLINK-13436
> Project: Flink
>  Issue Type: Test
>  Components: Table SQL / Planner, Tests
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.9.0
>
>
> We should add the TPC-H queries as E2E tests in order to verify the blink 
> planner.





[jira] [Commented] (FLINK-13381) BinaryHashTableTest and BinaryExternalSorterTest is crashed on Travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895755#comment-16895755
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-13381:
-

Another few instances:

https://api.travis-ci.org/v3/job/562437489/log.txt
https://api.travis-ci.org/v3/job/562380020/log.txt

But these have appeared for quite a while already, and I haven't seen any in 
recent builds so far.

> BinaryHashTableTest and BinaryExternalSorterTest  is crashed on Travis
> --
>
> Key: FLINK-13381
> URL: https://issues.apache.org/jira/browse/FLINK-13381
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> Here is an instance of master: 
> https://api.travis-ci.org/v3/job/562437128/log.txt
> Here is an instance of 1.9: https://api.travis-ci.org/v3/job/562380020/log.txt





[GitHub] [flink] asfgit closed pull request #9203: [FLINK-13375][table-api] Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-29 Thread GitBox
asfgit closed pull request #9203: [FLINK-13375][table-api] Improve config names 
in ExecutionConfigOptions and OptimizerConfigOptions
URL: https://github.com/apache/flink/pull/9203
 
 
   




[jira] [Assigned] (FLINK-13347) should handle new JoinRelType(SEMI/ANTI) in switch case

2019-07-29 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13347:
---

Assignee: godfrey he

> should handle new JoinRelType(SEMI/ANTI) in switch case
> ---
>
> Key: FLINK-13347
> URL: https://issues.apache.org/jira/browse/FLINK-13347
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Calcite 1.20 introduces {{SEMI}} & {{ANTI}} to {{JoinRelType}}, blink planner 
> & flink planner should handle them in each switch case





[GitHub] [flink] flinkbot edited a comment on issue #9247: [FLINK-13386][web]: Fix frictions in the new default Web Frontend

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9247: [FLINK-13386][web]: Fix frictions in 
the new default Web Frontend
URL: https://github.com/apache/flink/pull/9247#issuecomment-515660312
 
 
   ## CI report:
   
   * dbe883c57e689ed544de09423192843c758bfa54 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120935623)
   * 706ebbe34b99bc2c9e14dfc92ad3c683b566f147 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120936069)
   * 12fa022f25d91065fd2eeb91e29118611b8ac5c6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120936600)
   * 314bdcc4b411ad226124d42704412d2c176c3648 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120937528)
   * b56ba2a95d8f0fdf705977058adb9ef9f08d17c0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120937670)
   * cbe85dd799b686faf1d6a1b67f51cac2cd1c94fe : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120938960)
   * 89e02cdd75bc8fec6bf03e26ec9bd26b8b231cda : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184053)
   




[jira] [Created] (FLINK-13489) Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-13489:
---

 Summary: Heavy deployment end-to-end test fails on Travis with TM 
heartbeat timeout
 Key: FLINK-13489
 URL: https://issues.apache.org/jira/browse/FLINK-13489
 Project: Flink
  Issue Type: Bug
  Components: Test Infrastructure
Reporter: Tzu-Li (Gordon) Tai


https://api.travis-ci.org/v3/job/564925128/log.txt

{code}

 The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: Job failed. (JobID: 
1b4f1807cc749628cfc1bdf04647527a)
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:250)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:338)
at 
org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:60)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1507)
at 
org.apache.flink.deployment.HeavyDeploymentStressTestProgram.main(HeavyDeploymentStressTestProgram.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:576)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:438)
at 
org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:274)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:746)
at 
org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:273)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:205)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1010)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:1083)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at 
org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1083)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution 
failed.
at 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:146)
at 
org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:247)
... 21 more
Caused by: java.util.concurrent.TimeoutException: Heartbeat of TaskManager with 
id ea456d6a590eca7598c19c4d35e56db9 timed out.
at 
org.apache.flink.runtime.jobmaster.JobMaster$TaskManagerHeartbeatListener.notifyHeartbeatTimeout(JobMaster.java:1149)
at 
org.apache.flink.runtime.heartbeat.HeartbeatManagerImpl$HeartbeatMonitor.run(HeartbeatManagerImpl.java:318)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRunAsync(AkkaRpcActor.java:397)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:190)
at 
org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at 

[jira] [Updated] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13377:

Priority: Blocker  (was: Major)

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0, 1.10.0
>
> Attachments: 198.jpg, 495.jpg
>
>
> This is an instance: [https://api.travis-ci.org/v3/job/562011491/log.txt]
> ==
>  Running 'Streaming SQL end-to-end test' 
> ==
>  TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314
>  Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.9-SNAPSHOT-bin/flink-1.9-SNAPSHOT
>  Starting cluster. Starting standalonesession daemon on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Waiting for 
> dispatcher REST endpoint to come up... Waiting for dispatcher REST endpoint 
> to come up... Waiting for dispatcher REST endpoint to come up... Dispatcher 
> REST endpoint is up. [INFO] 1 instance(s) of taskexecutor are already running 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor 
> daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 2 
> instance(s) of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [INFO] 3 instance(s) 
> of taskexecutor are already running on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting taskexecutor daemon 
> on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Starting execution 
> of program Program execution finished Job with JobID 
> 7c7b66dd4e8dc17e229700b1c746aba6 has finished. Job Runtime: 77371 ms cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/.part-*':
>  No such file or directory cat: 
> '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/part-*':
>  No such file or directory FAIL StreamSQL: Output hash mismatch. Got 
> d41d8cd98f00b204e9800998ecf8427e, expected b29f14ed221a936211202ff65b51ee26. 
> head hexdump of actual: Stopping taskexecutor daemon (pid: 9983) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping standalonesession 
> daemon (pid: 8088) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. 
> Skipping taskexecutor daemon (pid: 21571), because it is not running anymore 
> on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor 
> daemon (pid: 22154), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 22595), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 30622), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 3850), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4405), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Skipping taskexecutor daemon 
> (pid: 4839), because it is not running anymore on 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 8531) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping 
> taskexecutor daemon (pid: 9077) on host 
> travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. Stopping taskexecutor daemon 
> (pid: 9518) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641. [FAIL] 
> Test script contains errors. Checking of logs skipped. [FAIL] 'Streaming SQL 
> end-to-end test' failed after 1 minutes and 51 seconds! Test exited with exit 
> code 1



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[jira] [Commented] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895763#comment-16895763
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-13377:
-

Updated to be a blocker.

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
> Attachments: 198.jpg, 495.jpg
>
>





[GitHub] [flink] Myasuka opened a new pull request #9268: [FLINK-13452] Ensure to fail global when exception happens during resetting tasks of regions

2019-07-29 Thread GitBox
Myasuka opened a new pull request #9268: [FLINK-13452] Ensure to fail global 
when exception happens during resetting tasks of regions
URL: https://github.com/apache/flink/pull/9268
 
 
   ## What is the purpose of the change
   
   After [FLINK-13060](https://issues.apache.org/jira/browse/FLINK-13060), we 
run `createResetAndRescheduleTasksCallback` within another runnable, 
`resetAndRescheduleTasks`. Unfortunately, any exception thrown in 
`createResetAndRescheduleTasksCallback` terminates the thread silently; the 
exception is only recorded in the `outcome` field of the `FutureTask`. We 
should change the code back so that `failGlobal` is called within the 
`createResetAndRescheduleTasksCallback` runnable.
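   The swallowing behaviour described above can be reproduced with a plain 
`FutureTask`: an exception thrown by the wrapped callable never reaches the 
running thread and is only stored in the task's outcome. A minimal sketch (the 
class and message names here are illustrative, not taken from the Flink code 
base):

```java
import java.util.concurrent.FutureTask;

public class SwallowedException {

    // A task whose body always throws, standing in for a failing
    // reset callback (the exception message is made up for illustration).
    static FutureTask<Void> failingTask() {
        return new FutureTask<>(() -> {
            throw new IllegalStateException("reset failed");
        });
    }

    public static void main(String[] args) {
        FutureTask<Void> task = failingTask();
        task.run(); // returns normally: the exception is captured, not rethrown
        try {
            task.get(); // only an explicit get() surfaces the stored exception
        } catch (Exception e) {
            System.out.println("surfaced: " + e.getCause().getMessage());
        }
    }
}
```

   If nobody ever calls `get()` on the task, the failure disappears silently, 
which is why failing global inside the runnable itself is the safer pattern.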
   
   
   ## Brief change log
   
 - Make the `createResetAndRescheduleTasksCallback` runnable fail globally if 
it encounters any exception.
 - Refine `RegionFailoverITCase` to mock an exception so that the checkpoint 
store fails when recovering from a checkpoint for the first time.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
 - Refine `RegionFailoverITCase` to mock an exception so that the checkpoint 
store fails when recovering from a checkpoint for the first time.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: **no**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation of Hive source/sink

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9217: [FLINK-13277][hive] add documentation 
of Hive source/sink
URL: https://github.com/apache/flink/pull/9217#issuecomment-514589043
 
 
   ## CI report:
   
   * 516e655f7f0853d6585ae5de2fbecc438d57e474 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120432519)
   * fee6f2df235f113b7757ce436ee127711b0094e6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184693)
   




[jira] [Commented] (FLINK-13481) allow user launch job on yarn from SQL Client command line

2019-07-29 Thread Hongtao Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895776#comment-16895776
 ] 

Hongtao Zhang commented on FLINK-13481:
---

[~zjffdu] sounds good.

 

So what is the progress of this improvement? Can we use it in the 1.9.0 
release?

 

> allow user launch job on yarn from SQL Client command line
> --
>
> Key: FLINK-13481
> URL: https://issues.apache.org/jira/browse/FLINK-13481
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: Flink 1.10
> CDH 5.13.3
>  
>  
>Reporter: Hongtao Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> The Flink SQL Client's active command line doesn't load the FlinkYarnSessionCli 
> general options.
> The general options contain "addressOption", with which the user can specify 
> --jobmanager="yarn-cluster" or -m to run the SQL on a YARN cluster.
>  





[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308531362
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
+resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308531539
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
+resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[jira] [Updated] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13377:

Affects Version/s: (was: 1.10.0)

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 198.jpg, 495.jpg
>
>





[jira] [Updated] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13377:

Fix Version/s: (was: 1.10.0)

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Blocker
> Fix For: 1.9.0
>
> Attachments: 198.jpg, 495.jpg
>
>





[jira] [Updated] (FLINK-13488) flink-python fails to build on Travis due to PackagesNotFoundError

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13488:

Fix Version/s: 1.9.0

> flink-python fails to build on Travis due to PackagesNotFoundError
> --
>
> Key: FLINK-13488
> URL: https://issues.apache.org/jira/browse/FLINK-13488
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Test Infrastructure
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925115/log.txt
> {code}
> install conda ... [SUCCESS]
> install miniconda... [SUCCESS]
> installing python environment...
> installing python2.7...
> install python2.7... [SUCCESS]
> installing python3.3...
> PackagesNotFoundError: The following packages are not available from current 
> channels:
>   - python=3.3
> Current channels:
>   - https://repo.anaconda.com/pkgs/main/linux-64
>   - https://repo.anaconda.com/pkgs/main/noarch
>   - https://repo.anaconda.com/pkgs/r/linux-64
>   - https://repo.anaconda.com/pkgs/r/noarch
> {code}





[jira] [Commented] (FLINK-9900) Failed to testRestoreBehaviourWithFaultyStateHandles (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)

2019-07-29 Thread Biao Liu (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-9900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895793#comment-16895793
 ] 

Biao Liu commented on FLINK-9900:
-

Hi [~till.rohrmann], I have opened a PR to fix this. The unstable scenario is a
bit complicated; I described the details in the PR:
https://github.com/apache/flink/pull/9269.


> Failed to testRestoreBehaviourWithFaultyStateHandles 
> (org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase) 
> ---
>
> Key: FLINK-9900
> URL: https://issues.apache.org/jira/browse/FLINK-9900
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.5.1, 1.6.0, 1.9.0
>Reporter: zhangminglei
>Priority: Blocker
>  Labels: pull-request-available, test-stability
> Fix For: 1.9.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://api.travis-ci.org/v3/job/405843617/log.txt
> Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 124.598 sec <<< FAILURE! - in org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase
> testRestoreBehaviourWithFaultyStateHandles(org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase)  Time elapsed: 120.036 sec <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 120000 milliseconds
> 	at sun.misc.Unsafe.park(Native Method)
> 	at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
> 	at java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1693)
> 	at java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
> 	at java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1729)
> 	at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
> 	at org.apache.flink.test.checkpointing.ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles(ZooKeeperHighAvailabilityITCase.java:244)
> Results :
> Tests in error:
> 	ZooKeeperHighAvailabilityITCase.testRestoreBehaviourWithFaultyStateHandles:244 » TestTimedOut
> Tests run: 1453, Failures: 0, Errors: 1, Skipped: 29
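The trace above shows the test blocking indefinitely on `CompletableFuture.get()`. A hedged sketch (not the Flink test itself; class and method names are illustrative) of bounding such a wait with an explicit timeout:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class BoundedWait {

	// Wait for the future with an explicit timeout instead of blocking forever,
	// mapping a timeout to a sentinel value so the caller can react.
	static String awaitOrTimeout(CompletableFuture<String> future, long millis) {
		try {
			return future.get(millis, TimeUnit.MILLISECONDS);
		} catch (TimeoutException e) {
			return "timed out";
		} catch (Exception e) {
			throw new RuntimeException(e);
		}
	}

	public static void main(String[] args) {
		// An already-completed future returns immediately.
		System.out.println(awaitOrTimeout(CompletableFuture.completedFuture("ok"), 100));
		// A future that never completes hits the timeout branch.
		System.out.println(awaitOrTimeout(new CompletableFuture<>(), 50));
	}
}
```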





[GitHub] [flink] flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9236: [FLINK-13283][FLINK-13490][jdbc] Fix 
JDBC connectors with DataTypes.DATE/TIME/TIMESTAMP support and null checking
URL: https://github.com/apache/flink/pull/9236#issuecomment-515390325
 
 
   ## CI report:
   
   * 1135cc72f00606c7a230714838c938068887ce23 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120833949)
   * a1070517ff96b110db9a38e3daf28e92eccf236d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121193197)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-515414321
 
 
   ## CI report:
   
   * 2c59c8b33bcbc200978ed7b5ad27311ada599aab : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120842331)
   * 61008edcd78722ccd9ed143a2bd005ca91ee39b4 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121050765)
   * 221bf689e1968f83bc99d862b3522a9ad7d06829 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121178448)
   




[GitHub] [flink] zjuwangg commented on a change in pull request #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-07-29 Thread GitBox
zjuwangg commented on a change in pull request #9239: [FLINK-13385]Align Hive 
data type mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#discussion_r308503900
 
 

 ##
 File path: flink-connectors/flink-connector-hive/src/main/java/org/apache/flink/table/catalog/hive/util/HiveTypeUtil.java
 ##
 @@ -85,65 +86,68 @@ public static TypeInfo toHiveTypeInfo(DataType dataType) {
 		LogicalTypeRoot type = dataType.getLogicalType().getTypeRoot();
 
 		if (dataType instanceof AtomicDataType) {
-			if (type.equals(LogicalTypeRoot.BOOLEAN)) {
-				return TypeInfoFactory.booleanTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.TINYINT)) {
-				return TypeInfoFactory.byteTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.SMALLINT)) {
-				return TypeInfoFactory.shortTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.INTEGER)) {
-				return TypeInfoFactory.intTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.BIGINT)) {
-				return TypeInfoFactory.longTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.FLOAT)) {
-				return TypeInfoFactory.floatTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.DOUBLE)) {
-				return TypeInfoFactory.doubleTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.DATE)) {
-				return TypeInfoFactory.dateTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.TIMESTAMP_WITHOUT_TIME_ZONE)) {
-				return TypeInfoFactory.timestampTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.BINARY) || type.equals(LogicalTypeRoot.VARBINARY)) {
-				// Hive doesn't support variable-length binary string
-				return TypeInfoFactory.binaryTypeInfo;
-			} else if (type.equals(LogicalTypeRoot.CHAR)) {
-				CharType charType = (CharType) dataType.getLogicalType();
-
-				if (charType.getLength() > HiveChar.MAX_CHAR_LENGTH) {
-					throw new CatalogException(
-						String.format("HiveCatalog doesn't support char type with length of '%d'. " +
-							"The maximum length is %d",
-							charType.getLength(), HiveChar.MAX_CHAR_LENGTH));
+			switch (type) {
+				case BOOLEAN:
+					return TypeInfoFactory.booleanTypeInfo;
+				case TINYINT:
+					return TypeInfoFactory.byteTypeInfo;
+				case SMALLINT:
+					return TypeInfoFactory.shortTypeInfo;
+				case INTEGER:
+					return TypeInfoFactory.intTypeInfo;
+				case BIGINT:
+					return TypeInfoFactory.longTypeInfo;
+				case FLOAT:
+					return TypeInfoFactory.floatTypeInfo;
+				case DOUBLE:
+					return TypeInfoFactory.doubleTypeInfo;
+				case DATE:
+					return TypeInfoFactory.dateTypeInfo;
+				case TIMESTAMP_WITHOUT_TIME_ZONE:
+					return TypeInfoFactory.timestampTypeInfo;
+				case CHAR: {
+					CharType charType = (CharType) dataType.getLogicalType();
+					if (charType.getLength() > HiveChar.MAX_CHAR_LENGTH) {
+						throw new CatalogException(
+							String.format("HiveCatalog doesn't support char type with length of '%d'. " +
+								"The maximum length is %d",
+								charType.getLength(), HiveChar.MAX_CHAR_LENGTH));
+					}
+					return TypeInfoFactory.getCharTypeInfo(charType.getLength());
 				}
-
-				return TypeInfoFactory.getCharTypeInfo(charType.getLength());
-			} else if 
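The refactor above replaces an if/else chain over `LogicalTypeRoot` with a `switch`. A standalone sketch of the same pattern, using a hypothetical enum as a stand-in for the Flink and Hive classes:

```java
public class TypeRootSwitch {

	// Hypothetical stand-in for Flink's LogicalTypeRoot enum.
	enum Root { BOOLEAN, TINYINT, SMALLINT, INTEGER, BIGINT }

	// Map each type root to a Hive-style type name via a switch,
	// mirroring the structure of the refactored toHiveTypeInfo.
	static String toHiveName(Root root) {
		switch (root) {
			case BOOLEAN:  return "boolean";
			case TINYINT:  return "tinyint";
			case SMALLINT: return "smallint";
			case INTEGER:  return "int";
			case BIGINT:   return "bigint";
			default:
				throw new IllegalArgumentException("Unsupported type: " + root);
		}
	}

	public static void main(String[] args) {
		System.out.println(toHiveName(Root.INTEGER)); // int
	}
}
```

A `switch` over the enum also lets the compiler warn about unhandled cases, which the equals-chain cannot do.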

[jira] [Commented] (FLINK-13386) Frictions in the new default Web Frontend

2019-07-29 Thread Yadong Xie (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895716#comment-16895716
 ] 

Yadong Xie commented on FLINK-13386:


Also, the scroll bug in Firefox is caused by monaco-editor.

It is fixed in the monaco-editor master branch, but not released yet:

[https://github.com/Microsoft/monaco-editor/issues/1353]

> Frictions in the new default Web Frontend
> -
>
> Key: FLINK-13386
> URL: https://issues.apache.org/jira/browse/FLINK-13386
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Web Frontend
>Affects Versions: 1.9.0
>Reporter: Dawid Wysakowicz
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0
>
> Attachments: bug.png, repro.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> While manually testing the new WebUI I found a few frictions.
> * when using the UI the left panel hides unexpectedly at random moments
> * mouse wheel does not work on the logs (taskmanager, jobmanager) pane
> * the jobmanager configuration is not sorted
> * different sorting of the operators (the old UI showed the sources first)
> * the drop-down list for choosing operator/tasks metrics is not sorted, which 
> makes it super hard to screen through available metrics
> * arrow does not touch the rectangles in Chrome (see attached screenshot)
> There are also some views missing in the new UI that I personally found 
> useful in the old UI:
> * can't see watermarks for all operators at once
> * no numeric metrics (only graphs)





[GitHub] [flink] flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type 
mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-515435805
 
 
   ## CI report:
   
   * bb0663ddbb6eeda06b756c4ffc7094e64dbdb5b9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120851212)
   * 86a460407693769f0d2afaa3597c70f202126099 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121104847)
   * 5c25e802c096e2688e6cfa01ff7f74d3c050eef5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121108411)
   * c26538e93fcad20bd337b3766ccdfc30d46380fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121114320)
   * 4f32c8d7f8d14601e8caa28d1079ae3fdce0873e : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121181320)
   




[GitHub] [flink] hongtao12310 closed pull request #9267: Merge pull request #3 from apache/master

2019-07-29 Thread GitBox
hongtao12310 closed pull request #9267: Merge pull request #3 from apache/master
URL: https://github.com/apache/flink/pull/9267
 
 
   




[GitHub] [flink] flinkbot commented on issue #9267: Merge pull request #3 from apache/master

2019-07-29 Thread GitBox
flinkbot commented on issue #9267: Merge pull request #3 from apache/master
URL: https://github.com/apache/flink/pull/9267#issuecomment-516244907
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied
according to the order of the review items. For consensus, approval by a Flink
committer or PMC member is required.

Bot commands
The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[GitHub] [flink] wuchong edited a comment on issue #9261: [FLINK-13399][table] Create two separate table uber jars for old and blink planners

2019-07-29 Thread GitBox
wuchong edited a comment on issue #9261:  [FLINK-13399][table] Create two 
separate table uber jars for old and blink planners
URL: https://github.com/apache/flink/pull/9261#issuecomment-516248674
 
 
Sure @twalthr, @docete will look into the streaming e2e test. We also found that
the streaming e2e test is not stable on Travis (FLINK-13377). We will look into
both problems.




[jira] [Assigned] (FLINK-13377) Streaming SQL e2e test failed on travis

2019-07-29 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-13377:
---

Assignee: Zhenghua Gao

> Streaming SQL e2e test failed on travis
> ---
>
> Key: FLINK-13377
> URL: https://issues.apache.org/jira/browse/FLINK-13377
> Project: Flink
>  Issue Type: Bug
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Zhenghua Gao
>Assignee: Zhenghua Gao
>Priority: Major
> Fix For: 1.9.0, 1.10.0
>
> Attachments: 198.jpg, 495.jpg
>
>
> This is an instance: [https://api.travis-ci.org/v3/job/562011491/log.txt]
> ==============================================================================
> Running 'Streaming SQL end-to-end test'
> ==============================================================================
> TEST_DATA_DIR: /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314
> Flink dist directory: /home/travis/build/apache/flink/flink-dist/target/flink-1.9-SNAPSHOT-bin/flink-1.9-SNAPSHOT
> Starting cluster.
> Starting standalonesession daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Starting taskexecutor daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Waiting for dispatcher REST endpoint to come up...
> Waiting for dispatcher REST endpoint to come up...
> Waiting for dispatcher REST endpoint to come up...
> Waiting for dispatcher REST endpoint to come up...
> Waiting for dispatcher REST endpoint to come up...
> Waiting for dispatcher REST endpoint to come up...
> Dispatcher REST endpoint is up.
> [INFO] 1 instance(s) of taskexecutor are already running on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Starting taskexecutor daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> [INFO] 2 instance(s) of taskexecutor are already running on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Starting taskexecutor daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> [INFO] 3 instance(s) of taskexecutor are already running on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Starting taskexecutor daemon on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Starting execution of program
> Program execution finished
> Job with JobID 7c7b66dd4e8dc17e229700b1c746aba6 has finished.
> Job Runtime: 77371 ms
> cat: '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/.part-*': No such file or directory
> cat: '/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-47211990314/out/result/20/part-*': No such file or directory
> FAIL StreamSQL: Output hash mismatch. Got d41d8cd98f00b204e9800998ecf8427e, expected b29f14ed221a936211202ff65b51ee26.
> head hexdump of actual:
> Stopping taskexecutor daemon (pid: 9983) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Stopping standalonesession daemon (pid: 8088) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 21571), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 22154), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 22595), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 30622), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 3850), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 4405), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Skipping taskexecutor daemon (pid: 4839), because it is not running anymore on travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Stopping taskexecutor daemon (pid: 8531) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Stopping taskexecutor daemon (pid: 9077) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> Stopping taskexecutor daemon (pid: 9518) on host travis-job-3ac15c01-1a7d-48b2-b4a9-86575f5d4641.
> [FAIL] Test script contains errors. Checking of logs skipped.
> [FAIL] 'Streaming SQL end-to-end test' failed after 1 minutes and 51 seconds!
> Test exited with exit code 1
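The "Got" hash in the log, d41d8cd98f00b204e9800998ecf8427e, is the MD5 digest of empty input, which suggests the result files were empty or absent rather than merely wrong. A quick sketch confirming that hash:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class EmptyMd5 {

	// Hex-encode the MD5 digest of an empty byte array.
	static String emptyMd5() {
		try {
			byte[] digest = MessageDigest.getInstance("MD5").digest(new byte[0]);
			StringBuilder sb = new StringBuilder();
			for (byte b : digest) {
				sb.append(String.format("%02x", b));
			}
			return sb.toString();
		} catch (NoSuchAlgorithmException e) {
			throw new IllegalStateException(e);
		}
	}

	public static void main(String[] args) {
		System.out.println(emptyMd5()); // d41d8cd98f00b204e9800998ecf8427e
	}
}
```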





[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308528563
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -23,5 +23,658 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
 * This will be replaced by the TOC
 {:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the
+  release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+                     Name                                   Command               State                    Ports
+----------------------------------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_client_1                 /docker-entrypoint.sh flin ...   Exit 0
+flink-cluster-playground_jobmanager_1             /docker-entrypoint.sh jobm ...   Up       6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1                  start-kafka.sh                   Up       0.0.0.0:9094->9094/tcp
+flink-cluster-playground_taskmanager_1            /docker-entrypoint.sh task ...   Up       6123/tcp, 8081/tcp
+flink-cluster-playground_zookeeper_1              /bin/sh -c /usr/sbin/sshd  ...   Up       2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as well as the data generator are running ("Up").
+

[jira] [Created] (FLINK-13488) flink-python fails to build on Travis due to PackagesNotFoundError

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-13488:
---

 Summary: flink-python fails to build on Travis due to 
PackagesNotFoundError
 Key: FLINK-13488
 URL: https://issues.apache.org/jira/browse/FLINK-13488
 Project: Flink
  Issue Type: Bug
  Components: API / Python, Test Infrastructure
Reporter: Tzu-Li (Gordon) Tai


https://api.travis-ci.org/v3/job/564925115/log.txt

{code}
install conda ... [SUCCESS]
install miniconda... [SUCCESS]
installing python environment...
installing python2.7...
install python2.7... [SUCCESS]
installing python3.3...

PackagesNotFoundError: The following packages are not available from current channels:

  - python=3.3

Current channels:

  - https://repo.anaconda.com/pkgs/main/linux-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/linux-64
  - https://repo.anaconda.com/pkgs/r/noarch
{code}





[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308528408
 
 

 ##
 File path: flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventGenerator.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventSerializationSchema;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.ByteArraySerializer;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+import static org.apache.flink.streaming.examples.windowing.clickeventcount.ClickEventCount.WINDOW_SIZE;
+
+/**
+ * A generator which pushes {@link ClickEvent}s into a Kafka Topic configured via `--topic` and
+ * `--bootstrap.servers`.
+ *
+ * <p>The generator creates the same number of {@link ClickEvent}s for all pages. The delay between
+ * events is chosen such that processing time and event time roughly align. The generator always
+ * creates the same sequence of events.
+ */
+public class ClickEventGenerator {
+
+	public static final int EVENTS_PER_WINDOW = 1000;
+
+	private static final List<String> pages = Arrays.asList("/help", "/index", "/shop", "/jobs", "/about", "/news");
+
+	private static KafkaProducer<byte[], byte[]> producer;
+
+	private static Map<String, Long> nextTimestampPerKey = new HashMap<>();
+	private static int nextPageIndex = 0;
+
+	//this calculation is only accurate as long as pages.size() * EVENTS_PER_WINDOW divides the
+	//window size
+	public static final long DELAY = WINDOW_SIZE.toMilliseconds() / pages.size() / EVENTS_PER_WINDOW;
+
+	public static void main(String[] args) throws Exception {
+
+		final ParameterTool params = ParameterTool.fromArgs(args);
+
+		String topic = params.get("topic", "input");
+
+		Properties kafkaProps = createKafkaProperties(params);
+		producer = new KafkaProducer<>(kafkaProps);
+
+		while (true) {
+
+			String page = nextPage();
+			ClickEvent event = new ClickEvent(nextTimestamp(page), page);
+
+			ProducerRecord<byte[], byte[]> record = new ClickEventSerializationSchema(topic).serialize(
+				event,
+				null);
+
+			producer.send(record);
+
+			Thread.sleep(DELAY);
+		}
+	}
+
+	private static Properties createKafkaProperties(final ParameterTool params) {
+		String brokers = params.get("bootstrap.servers", "localhost:9092");
+		Properties kafkaProps = new Properties();
+		kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
+		kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
+		kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class.getCanonicalName());
+		return kafkaProps;
+	}
+
+	public static long nextTimestamp(String page) {
 Review comment:
   Arguably a better design, yes :) Done.
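The `DELAY` constant in the generator divides the window size by the page count and the events per window. With the one-minute window described in the playground docs (the exact `WINDOW_SIZE` value is assumed here), the arithmetic can be checked with a small sketch:

```java
public class DelayCheck {

	// Same arithmetic as ClickEventGenerator.DELAY, with an assumed 60s window.
	static long delayMillis(long windowMillis, int pages, int eventsPerWindow) {
		return windowMillis / pages / eventsPerWindow;
	}

	public static void main(String[] args) {
		// 60000 ms / 6 pages / 1000 events per window = 10 ms between events
		System.out.println(delayMillis(60_000, 6, 1000)); // 10
	}
}
```

As the code comment in the generator notes, this is only exact when `pages.size() * EVENTS_PER_WINDOW` divides the window size evenly, which it does in this configuration.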




[GitHub] [flink] hongtao12310 commented on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-29 Thread GitBox
hongtao12310 commented on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-516258316
 
 
@lirui-apache @xuefuz The changes I made are now in the createHiveConf
function, which is a static function, so we can simply call it via the
HiveCatalog class. I therefore think the test should belong with the
HiveCatalog tests, not HiveCatalogFactory, but so far it sounds like we don't
have any test cases for HiveCatalog.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530506
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible to handle Job submissions, the supervision of Jobs as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
 
 Review comment:
   Yes, this is nicer.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530515
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This Docker Compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one-minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
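+The tumbling-window count described above can be simulated outside of Flink in a
+few lines of plain Python, to make the window-assignment semantics concrete (an
+illustrative sketch only, not the actual Flink Job; `count_click_events` is a
+hypothetical helper mirroring the `ClickEvent` schema):

```python
from collections import defaultdict

WINDOW_MS = 60_000  # one-minute tumbling windows, as in the Job above

def count_click_events(events):
    """Count events per (window_start, page).

    `events` is an iterable of (timestamp_ms, page) pairs mirroring the
    ClickEvent schema. Each event is assigned to the tumbling one-minute
    window containing its timestamp, then counted per `page` key.
    """
    counts = defaultdict(int)
    for timestamp_ms, page in events:
        # Tumbling window assignment: round the timestamp down to the
        # start of its one-minute window.
        window_start = timestamp_ms - (timestamp_ms % WINDOW_MS)
        counts[(window_start, page)] += 1
    return dict(counts)
```

+With the event generator described above, every `(window, page)` key would
+come out with a count of exactly one thousand.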
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+          Name                        Command               State                 Ports
+---------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[jira] [Updated] (FLINK-13487) TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall failed on Travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tzu-Li (Gordon) Tai updated FLINK-13487:

Fix Version/s: 1.9.0

> TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall 
> failed on Travis
> 
>
> Key: FLINK-13487
> URL: https://issues.apache.org/jira/browse/FLINK-13487
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Task, Tests
>Reporter: Tzu-Li (Gordon) Tai
>Priority: Blocker
> Fix For: 1.9.0
>
>
> https://api.travis-ci.org/v3/job/564925114/log.txt
> {code}
> 21:14:47.090 [ERROR] Tests run: 4, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 5.754 s <<< FAILURE! - in 
> org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest
> 21:14:47.090 [ERROR] 
> testPartitionReleaseAfterReleaseCall(org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest)
>   Time elapsed: 0.136 s  <<< ERROR!
> java.util.concurrent.ExecutionException: 
> org.apache.flink.runtime.taskexecutor.exceptions.TaskSubmissionException: 
> Could not submit task because there is no JobManager associated for the job 
> 2a0ab40cb53241799b71ff6fd2f53d3d.
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest.testPartitionRelease(TaskExecutorPartitionLifecycleTest.java:331)
>   at 
> org.apache.flink.runtime.taskexecutor.TaskExecutorPartitionLifecycleTest.testPartitionReleaseAfterReleaseCall(TaskExecutorPartitionLifecycleTest.java:201)
> Caused by: 
> org.apache.flink.runtime.taskexecutor.exceptions.TaskSubmissionException: 
> Could not submit task because there is no JobManager associated for the job 
> 2a0ab40cb53241799b71ff6fd2f53d3d.
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase

2019-07-29 Thread GitBox
flinkbot commented on issue #9269: [FLINK-9900][tests] Fix unstable 
ZooKeeperHighAvailabilityITCase
URL: https://github.com/apache/flink/pull/9269#issuecomment-516265744
 
 
   ## CI report:
   
   * cb68c75d9078f8631c89ee30b7f6b1309b189be3 : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121192115)
   




[jira] [Commented] (FLINK-12845) Execute multiple statements in command line or sql script file

2019-07-29 Thread frank wang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895718#comment-16895718
 ] 

frank wang commented on FLINK-12845:


[~docete], hi, can you assign this issue to me? thx

> Execute multiple statements in command line or sql script file
> --
>
> Key: FLINK-12845
> URL: https://issues.apache.org/jira/browse/FLINK-12845
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Client
>Reporter: Zhenghua Gao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> User may copy multiple statements and paste them on command line GUI of SQL 
> Client, or User may pass a script file(using SOURCE command or -f option), we 
> should parse and execute them one by one(like other sql cli applications)





[jira] [Updated] (FLINK-13475) Reduce dependency on third-party maven repositories

2019-07-29 Thread Terry Wang (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Terry Wang updated FLINK-13475:
---
Attachment: flink-connector-hive-dependency.txt

> Reduce dependency on third-party maven repositories
> ---
>
> Key: FLINK-13475
> URL: https://issues.apache.org/jira/browse/FLINK-13475
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Hive
>Affects Versions: 1.9.0
>Reporter: Till Rohrmann
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
> Attachments: flink-connector-hive-dependency.txt
>
>
> A user reported that Flink's Hive connectors requires third-party maven 
> repositories which are not everywhere accessible in order to build. 
> Concretely, the hive connector requires access to Conjars for 
> org.pentaho:pentaho-aggdesigner-algorithm and javax.jms:jms:jar:1.1.
> It would be great to reduce the dependency on third-party maven repositories 
> if possible. For future reference, other projects faced similar problems: 
> CALCITE-605, CALCITE-1474





[GitHub] [flink] zjuwangg commented on issue #9237: [FLINK-13431][hive] NameNode HA configuration was not loaded when running HiveConnector on Yarn

2019-07-29 Thread GitBox
zjuwangg commented on issue #9237: [FLINK-13431][hive] NameNode HA 
configuration was not loaded when running HiveConnector on Yarn
URL: https://github.com/apache/flink/pull/9237#issuecomment-516243541
 
 
   Thanks for your efforts @hongtao12310 




[GitHub] [flink] flinkbot commented on issue #9267: Merge pull request #3 from apache/master

2019-07-29 Thread GitBox
flinkbot commented on issue #9267: Merge pull request #3 from apache/master
URL: https://github.com/apache/flink/pull/9267#issuecomment-516246008
 
 
   ## CI report:
   
   * 4c00f76596da5e258c2b642525505cfd8327af9d : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121185985)
   




[jira] [Commented] (FLINK-13481) allow user launch job on yarn from SQL Client command line

2019-07-29 Thread Jeff Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895747#comment-16895747
 ] 

Jeff Zhang commented on FLINK-13481:


[~hongtao12310] In my opinion, it would be better to do it after we improve the 
flink client api. After that we will unify all the execution modes via a single 
entry point so that all the flink downstream projects (including sql-client) 
will benefit from it.  
http://mail-archives.apache.org/mod_mbox/flink-dev/201906.mbox/%3ccaady7x7e4ut8b8qs+w8cfze+vzi2s1bexzofj+oeywx3jnc...@mail.gmail.com%3E

> allow user launch job on yarn from SQL Client command line
> --
>
> Key: FLINK-13481
> URL: https://issues.apache.org/jira/browse/FLINK-13481
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / Client
>Affects Versions: 1.10.0
> Environment: Flink 1.10
> CDH 5.13.3
>  
>  
>Reporter: Hongtao Zhang
>Priority: Critical
> Fix For: 1.10.0
>
>
> Flink SQL Client active command line doesn't load the FlinkYarnSessionCli 
> general options
> the general options contain "addressOption", with which the user can specify 
> --jobmanager="yarn-cluster" or -m to run the SQL on YARN Cluster
>  





[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308541840
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This Docker Compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long-lived
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, supervising Jobs, and managing 
resources. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started, a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one-minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will set up the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+          Name                        Command               State                 Ports
+---------------------------------------------------------------------------------------------------
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[jira] [Created] (FLINK-13479) Cassandra POJO Sink - Prepared Statement query does not have deterministic ordering of columns - causing prepared statement cache overflow

2019-07-29 Thread Ronak Thakrar (JIRA)
Ronak Thakrar created FLINK-13479:
-

 Summary: Cassandra POJO Sink - Prepared Statement query does not 
have deterministic ordering of columns - causing prepared statement cache 
overflow
 Key: FLINK-13479
 URL: https://issues.apache.org/jira/browse/FLINK-13479
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / Cassandra
Reporter: Ronak Thakrar


When using the Cassandra POJO Sink as part of Flink Jobs, the prepared statement 
query string is generated automatically while inserting data (via the 
Mapper.saveQuery method), but the Cassandra entity does not have a deterministic 
column ordering enforced, so every time the column positions change a new 
prepared statement is generated and used. As a result, the prepared statement 
query cache overflows, because the columns are in a random order each time the 
insert statement query string is generated. 

Following is the detailed explanation for what happens inside the Datastax java 
driver([https://datastax-oss.atlassian.net/browse/JAVA-1587]):

The current Mapper uses random ordering of columns when it creates prepared 
queries. This is fine when only 1 java client is accessing a cluster (and 
assuming the application developer does the correct thing by re-using a 
Mapper), since each Mapper will reuse its prepared statements. However, when you 
have many java clients accessing a cluster, they will each create their own 
permutations of column ordering, and can thrash the prepared statement cache on 
the cluster.

I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
its set of AliasedMappedProperty - sorted by the column name 
(col.mappedProperty.getMappedName()). This would create a deterministic 
ordering of columns, and all java processes accessing the same cluster would 
end up with the same prepared queries for the same entities.
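A minimal sketch of that proposal in Python (illustrative only: the actual fix
lives in the Datastax Java driver's Mapper, and `build_insert_query` is a
hypothetical helper, not driver API):

```python
def build_insert_query(table, columns):
    """Build an INSERT statement with a deterministic column ordering.

    Sorting the columns by name (the TreeMap idea described above) makes
    every client generate the identical query string for the same entity,
    so the server-side prepared statement cache keeps a single entry
    instead of one entry per column permutation.
    """
    ordered = sorted(columns)  # deterministic ordering, like a TreeMap keyset
    names = ", ".join(ordered)
    placeholders = ", ".join(":" + c for c in ordered)
    return f"INSERT INTO {table} ({names}) VALUES ({placeholders})"
```

With sorting in place, two clients that discover the entity's columns in
different orders still prepare the exact same statement.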

This issue is already fixed in a newer Datastax java driver version (3.3.1), 
which is not used by the Flink Cassandra connector (currently on 3.0.0).

I upgraded the driver version to 3.3.1 locally in the Flink Cassandra connector 
and tested it: the driver stopped creating new prepared statements with a 
different column ordering for the same entity. I have the fix for this issue, 
would like to contribute the change, and will raise a PR for it. 

Flink Cassandra Connector Version: flink-connector-cassandra_2.11

Flink Version: 1.7.1





[jira] [Updated] (FLINK-13479) Cassandra POJO Sink - Prepared Statement query does not have deterministic ordering of columns - causing prepared statement cache overflow

2019-07-29 Thread Ronak Thakrar (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ronak Thakrar updated FLINK-13479:
--
Description: 
When using the Cassandra POJO Sink as part of Flink Jobs, the prepared statement 
query string is generated automatically while inserting data (via the 
Mapper.saveQuery method), but the Cassandra entity does not have a deterministic 
column ordering enforced, so every time the column positions change a new 
prepared statement is generated and used. As a result, the prepared statement 
query cache overflows, because the columns are in a random order each time the 
insert statement query string is generated. 

Following is the detailed explanation for what happens inside the Datastax java 
driver([https://datastax-oss.atlassian.net/browse/JAVA-1587]):

The current Mapper uses random ordering of columns when it creates prepared 
queries. This is fine when only 1 java client is accessing a cluster (and 
assuming the application developer does the correct thing by re-using a 
Mapper), since each Mapper will reuse its prepared statements. However, when you 
have many java clients accessing a cluster, they will each create their own 
permutations of column ordering, and can thrash the prepared statement cache on 
the cluster.

I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
its set of AliasedMappedProperty - sorted by the column name 
(col.mappedProperty.getMappedName()). This would create a deterministic 
ordering of columns, and all java processes accessing the same cluster would 
end up with the same prepared queries for the same entities.

This issue is already fixed in a newer Datastax java driver version (3.3.1), 
which is not used by the Flink Cassandra connector (currently on 3.0.0).

I upgraded the driver version to 3.3.1 locally in the Flink Cassandra connector 
and tested it: the driver stopped creating new prepared statements with a 
different column ordering for the same entity. I have the fix for this issue, 
would like to contribute the change, and will raise a PR for it. 

Flink Cassandra Connector Version: flink-connector-cassandra_2.11

Flink Version: 1.7.1

I am creating a PR for this, which can be merged and re-released in a new minor 
or patch release as required.

  was:
While using Cassandra POJO Sink as part of Flink Jobs - prepared statements 
query string which is automatically generated while inserting the data(using 
Mapper.saveQuery method), Cassandra entity does not have deterministic ordering 
enforced-so every time column position is changed a new prepared statement is 
generated and used.  As an effect of that prepared statement query cache is 
overflown because every time when insert statement query string is generated by 
- columns are in random order. 

Following is the detailed explanation for what happens inside the Datastax java 
driver([https://datastax-oss.atlassian.net/browse/JAVA-1587]):

The current Mapper uses random ordering of columns when it creates prepared 
queries. This is fine when only 1 java client is accessing a cluster (and 
assuming the application developer does the correct thing by re-using a 
Mapper), since each Mapper will reused prepared statement. However when you 
have many java clients accessing a cluster, they will each create their own 
permutations of column ordering, and can thrash the prepared statement cache on 
the cluster.

I propose that the Mapper uses a TreeMap instead of a HashMap when it builds 
its set of AliasedMappedProperty - sorted by the column name 
(col.mappedProperty.getMappedName()). This would create a deterministic 
ordering of columns, and all java processes accessing the same cluster would 
end up with the same prepared queries for the same entities.

This issue is already fixed in the Datastax java driver update version(3.3.1) 
which is not used by Flink Cassandra connector (using 3.0.0).

I upgraded the driver version to 3.3.1 locally in Flink Cassandra connector and 
tested, it stopped creating new prepared statements with different ordering of 
column for the same entity. I have the fix for this issue and would like to 
contribute the change and will raise the PR request for the same. 

Flink Cassandra Connector Version: flink-connector-cassandra_2.11

Flink Version: 1.7.1


> Cassandra POJO Sink - Prepared Statement query does not have deterministic 
> ordering of columns - causing prepared statement cache overflow
> --
>
> Key: FLINK-13479
> URL: https://issues.apache.org/jira/browse/FLINK-13479
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / Cassandra
>Reporter: Ronak Thakrar
>Priority: Major
>
> While using Cassandra 

[GitHub] [flink] flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type mapping with FLIP-37

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9239: [FLINK-13385]Align Hive data type 
mapping with FLIP-37
URL: https://github.com/apache/flink/pull/9239#issuecomment-515435805
 
 
   ## CI report:
   
   * bb0663ddbb6eeda06b756c4ffc7094e64dbdb5b9 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120851212)
   * 86a460407693769f0d2afaa3597c70f202126099 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121104847)
   * 5c25e802c096e2688e6cfa01ff7f74d3c050eef5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121108411)
   * c26538e93fcad20bd337b3766ccdfc30d46380fd : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121114320)
   * 4f32c8d7f8d14601e8caa28d1079ae3fdce0873e : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121181320)
   




[jira] [Commented] (FLINK-13415) Document how to use hive connector in scala shell

2019-07-29 Thread Jeff Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895712#comment-16895712
 ] 

Jeff Zhang commented on FLINK-13415:


Thanks for the comment [~sjwiesman], I discussed it with [~xuefuz] offline. The 
background is that there are currently some usability issues in the sql client, 
and we are afraid they won't be fixed in 1.9, so we'd like to also introduce 
another approach (the scala shell) for using hive in flink. But you are right, 
the hive-related docs are spread across too many pages. Maybe I can add a link 
in `hive_integration.md` referring to this doc, `scala_shell.md`. What do you think?

> Document how to use hive connector in scala shell
> -
>
> Key: FLINK-13415
> URL: https://issues.apache.org/jira/browse/FLINK-13415
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Affects Versions: 1.9.0
>Reporter: Jeff Zhang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>






[GitHub] [flink] flinkbot edited a comment on issue #9247: [FLINK-13386][web]: Fix frictions in the new default Web Frontend

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9247: [FLINK-13386][web]: Fix frictions in 
the new default Web Frontend
URL: https://github.com/apache/flink/pull/9247#issuecomment-515660312
 
 
   ## CI report:
   
   * dbe883c57e689ed544de09423192843c758bfa54 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120935623)
   * 706ebbe34b99bc2c9e14dfc92ad3c683b566f147 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120936069)
   * 12fa022f25d91065fd2eeb91e29118611b8ac5c6 : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120936600)
   * 314bdcc4b411ad226124d42704412d2c176c3648 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120937528)
   * b56ba2a95d8f0fdf705977058adb9ef9f08d17c0 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120937670)
   * cbe85dd799b686faf1d6a1b67f51cac2cd1c94fe : SUCCESS 
[Build](https://travis-ci.com/flink-ci/flink/builds/120938960)
   * 89e02cdd75bc8fec6bf03e26ec9bd26b8b231cda : PENDING 
[Build](https://travis-ci.com/flink-ci/flink/builds/121184053)
   




[jira] [Resolved] (FLINK-13375) Improve config names in ExecutionConfigOptions and OptimizerConfigOptions

2019-07-29 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13375.
-
Resolution: Resolved

Merged in 1.10.0: dfe3eb0bfd5aa1b20fd54584021e4cd29e01f2e6
Merged in 1.9.0: 51dd916244f9fd52aa23122c47fe59b06e1d7812

> Improve config names in ExecutionConfigOptions and OptimizerConfigOptions
> -
>
> Key: FLINK-13375
> URL: https://issues.apache.org/jira/browse/FLINK-13375
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jark Wu
>Assignee: Jark Wu
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Move ExecutionConfigOptions and OptimizerConfigOptions to table-api.
> We should also go through every config options in detail in this issue. 
> Because we are now moving it to the API module. We should actually discuss 
> how the properties are named and make sure that those options follow Flink 
> naming conventions. 





[jira] [Resolved] (FLINK-13347) should handle new JoinRelType(SEMI/ANTI) in switch case

2019-07-29 Thread Jark Wu (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-13347.
-
Resolution: Fixed

Fixed in 1.10.0: ef29f305cd3d907d7c445c271b314ea643baaeeb
Fixed in 1.9.0: 3f5b1f80bf0551ba2b59d72c48002b1ed5bf65f1

> should handle new JoinRelType(SEMI/ANTI) in switch case
> ---
>
> Key: FLINK-13347
> URL: https://issues.apache.org/jira/browse/FLINK-13347
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Legacy Planner, Table SQL / Planner
>Reporter: godfrey he
>Assignee: godfrey he
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.9.0, 1.10.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Calcite 1.20 introduces {{SEMI}} & {{ANTI}} to {{JoinRelType}}, blink planner 
> & flink planner should handle them in each switch case





[jira] [Commented] (FLINK-13381) BinaryHashTableTest and BinaryExternalSorterTest is crashed on Travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)


[ 
https://issues.apache.org/jira/browse/FLINK-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16895758#comment-16895758
 ] 

Tzu-Li (Gordon) Tai commented on FLINK-13381:
-

Some other similar occurrences, for {{OuterJoinITCase}} and {{JoinITCase}}:
https://api.travis-ci.org/v3/job/562437488/log.txt
https://api.travis-ci.org/v3/job/562437492/log.txt

All appeared only on June 18th, though.

> BinaryHashTableTest and BinaryExternalSorterTest  is crashed on Travis
> --
>
> Key: FLINK-13381
> URL: https://issues.apache.org/jira/browse/FLINK-13381
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Reporter: Jark Wu
>Assignee: Jingsong Lee
>Priority: Critical
> Fix For: 1.9.0, 1.10.0
>
>
> Here is an instance of master: 
> https://api.travis-ci.org/v3/job/562437128/log.txt
> Here is an instance of 1.9: https://api.travis-ci.org/v3/job/562380020/log.txt



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] flinkbot commented on issue #9268: [FLINK-13452] Ensure to fail global when exception happens during reseting tasks of regions

2019-07-29 Thread GitBox
flinkbot commented on issue #9268: [FLINK-13452] Ensure to fail global when 
exception happens during reseting tasks of regions
URL: https://github.com/apache/flink/pull/9268#issuecomment-516256693
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-13452) Pipelined region failover strategy does not recover Job if checkpoint cannot be read

2019-07-29 Thread ASF GitHub Bot (JIRA)


 [ 
https://issues.apache.org/jira/browse/FLINK-13452?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-13452:
---
Labels: pull-request-available  (was: )

> Pipelined region failover strategy does not recover Job if checkpoint cannot 
> be read
> 
>
> Key: FLINK-13452
> URL: https://issues.apache.org/jira/browse/FLINK-13452
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.9.0, 1.10.0
>Reporter: Gary Yao
>Assignee: Yun Tang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.9.0
>
> Attachments: jobmanager.log
>
>
> The job does not recover if a checkpoint cannot be read and 
> {{jobmanager.execution.failover-strategy}} is set to _"region"_. 
> *Analysis*
> The {{RestartCallback}} created by 
> {{AdaptedRestartPipelinedRegionStrategyNG}} throws a {{RuntimeException}} if 
> no checkpoints could be read. When the restart is invoked in a separate 
> thread pool, the exception is swallowed. See:
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/failover/AdaptedRestartPipelinedRegionStrategyNG.java#L117-L119]
> [https://github.com/apache/flink/blob/21621fbcde534969b748f21e9f8983e3f4e0fb1d/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/restart/FixedDelayRestartStrategy.java#L65]
> *Expected behavior*
>  * Job should restart
>  
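The swallowed-exception pattern described above can be reproduced in plain JDK terms. The sketch below is illustrative only: `failGlobal` is a hypothetical stand-in for the ExecutionGraph's global failover path, not the actual Flink API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class SwallowedExceptionDemo {

    // Stand-in for the ExecutionGraph's global failover path (hypothetical name).
    static final AtomicReference<Throwable> globalFailure = new AtomicReference<>();

    static void failGlobal(Throwable t) {
        globalFailure.set(t);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService restartPool = Executors.newSingleThreadExecutor();

        // Broken variant: an exception thrown inside execute() never reaches
        // the submitting thread, so the restart silently does not happen.
        restartPool.execute(() -> {
            throw new RuntimeException("could not read checkpoint");
        });

        // Fixed variant: catch everything inside the callback and route it
        // to the global failure handler so the job can still be recovered.
        restartPool.execute(() -> {
            try {
                throw new RuntimeException("could not read checkpoint");
            } catch (Throwable t) {
                failGlobal(t);
            }
        });

        restartPool.shutdown();
        restartPool.awaitTermination(5, TimeUnit.SECONDS);

        if (globalFailure.get() == null) {
            throw new AssertionError("fixed variant should have reported the failure");
        }
        System.out.println("reported: " + globalFailure.get().getMessage());
    }
}
```

Running this prints the captured failure only for the fixed variant; the first submitted callback's exception disappears into the pool's default uncaught-exception handling, which is exactly why the job never restarts.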



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] ifndef-SleePy opened a new pull request #9269: [FLINK-9900][tests] Fix unstable ZooKeeperHighAvailabilityITCase

2019-07-29 Thread GitBox
ifndef-SleePy opened a new pull request #9269: [FLINK-9900][tests] Fix unstable 
ZooKeeperHighAvailabilityITCase
URL: https://github.com/apache/flink/pull/9269
 
 
   ## What is the purpose of the change
   
   * Fix unstable 
`ZooKeeperHighAvailabilityITCase`.`testRestoreBehaviourWithFaultyStateHandles`
   
   * The case is designed as below
 - This case assumes that the first 5 checkpoints (1-5) would succeed
 - Then the job blocks on the snapshot of checkpoint 6
 - At this time, the checkpoint files are moved away on purpose
 - Checkpoint 6 would fail due to an expected snapshot failure
 - Then the job would fail due to this failed checkpoint
 - And the job could not recover from checkpoint 5 because there is no 
checkpoint file
 - After moving these checkpoint files back, the job could recover and 
continue working.
   
   * But there is a race condition between failing the job and triggering another 
checkpoint
   * There might be an unexpected successful checkpoint 7 if the job canceling 
is not fast enough
   * The job could then recover from checkpoint 7 without waiting for these checkpoint 
files to be moved back
   
   ## Brief change log
   
   * The basic idea of the fix is to prevent the unexpected checkpoint 7
   * Add a latch to block snapshot until the HA storage is recovered
   
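A minimal, self-contained sketch of that latch idea in plain JDK terms (the class and method names below are illustrative, not the actual test code): the snapshot path blocks on a latch until the HA storage is recovered, so no extra checkpoint can complete while the checkpoint files are still moved away.

```java
import java.util.concurrent.CountDownLatch;

public class SnapshotGateDemo {

    // Gate that is released once the HA storage (the moved checkpoint
    // files) is back in place; until then, no snapshot can complete.
    static final CountDownLatch haStorageRecovered = new CountDownLatch(1);

    static String snapshot(int checkpointId) throws InterruptedException {
        haStorageRecovered.await(); // blocks the unexpected checkpoint 7
        return "checkpoint-" + checkpointId + "-complete";
    }

    public static void main(String[] args) throws Exception {
        StringBuilder result = new StringBuilder();
        Thread snapshotThread = new Thread(() -> {
            try {
                result.append(snapshot(7));
            } catch (InterruptedException ignored) {
            }
        });
        snapshotThread.start();

        // Simulate moving the checkpoint files back, then open the gate.
        haStorageRecovered.countDown();
        snapshotThread.join();

        System.out.println(result); // checkpoint-7-complete
    }
}
```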
   ## Verifying this change
   
   * This change is already covered by existing tests
   * This unstable scenario can be reproduced as below
 - There is a race condition between failing the job and triggering another 
checkpoint
 - Making the job fail more slowly reproduces the scenario
 - Modify the `FailJobCallback` of `CheckpointFailureManager` in 
`ExecutionGraph`.`enableCheckpointing`, changing `execute` to `schedule` with 
a delay
 - There would then be an unexpected successful checkpoint 7
 - This case would hang forever because it never fails 5 times, since it 
could recover from checkpoint 7
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308537097
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, the supervision of Jobs, as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
+
+{% endif %}
+
+In this section you will setup the playground locally on your machine and 
verify that the Job is 
+running successfully.
+
+This guide assumes that you have [docker](https://docs.docker.com/) (1.12+) and
+[docker-compose](https://docs.docker.com/compose/) (2.1+) installed on your 
machine.
+
+The required configuration files are available in the 
+[flink-playgrounds](https://github.com/apache/flink-playgrounds) repository. 
Check it out and spin
+up the environment:
+
+{% highlight bash %}
+git clone --branch release-{{ site.version }} 
g...@github.com:apache/flink-playgrounds.git
+cd flink-cluster-playground
+docker-compose up -d
+{% endhighlight %}
+
+Afterwards, `docker-compose ps` should give you the following output:
+
+{% highlight bash %}
+ NameCommand   
State   Ports
+
+flink-cluster-playground_clickevent-generator_1   /docker-entrypoint.sh java 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_client_1 /docker-entrypoint.sh flin 
...   Exit 0   
+flink-cluster-playground_jobmanager_1 /docker-entrypoint.sh jobm 
...   Up   6123/tcp, 0.0.0.0:8081->8081/tcp
+flink-cluster-playground_kafka_1  start-kafka.sh   
Up   0.0.0.0:9094->9094/tcp  
+flink-cluster-playground_taskmanager_1/docker-entrypoint.sh task 
...   Up   6123/tcp, 8081/tcp  
+flink-cluster-playground_zookeeper_1  /bin/sh -c /usr/sbin/sshd  
...   Up   2181/tcp, 22/tcp, 2888/tcp, 3888/tcp
+{% endhighlight %}
+
+This indicates that the client container has successfully submitted the Flink 
Job ("Exit 0") and all 
+cluster components as 

[GitHub] [flink] flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - DataStream Example Walkthrough

2019-07-29 Thread GitBox
flinkbot edited a comment on issue #9210: [FLINK-12746][docs] Getting Started - 
DataStream Example Walkthrough
URL: https://github.com/apache/flink/pull/9210#issuecomment-514437706
 
 
   ## CI report:
   
   * 5eb979da047c442c0205464c92b5bd9ee3a740dc : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120299964)
   * d7bf53a30514664925357bd5817305a02553d0a3 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120506936)
   * 02cca7fb6283b84a20ee019159ccb023ccffbd82 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120769129)
   * 5009b10d38eef92f25bfe4ff4608f2dd121ea9c6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120915709)
   * e3b272586d8f41d3800e86134730c4dc427952a6 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120916220)
   * f1aee543a7aef88e3cf052f4d686ab0a8e5938e5 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/120996260)
   * c66060dba290844085f90f554d447c6d7033779d : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121131224)
   * 700e5c19a3d49197ef2b18a646f0b6e1bf783ba8 : FAILURE 
[Build](https://travis-ci.com/flink-ci/flink/builds/121174288)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13481) allow user launch job on yarn from SQL Client command line

2019-07-29 Thread Hongtao Zhang (JIRA)
Hongtao Zhang created FLINK-13481:
-

 Summary: allow user launch job on yarn from SQL Client command line
 Key: FLINK-13481
 URL: https://issues.apache.org/jira/browse/FLINK-13481
 Project: Flink
  Issue Type: Improvement
  Components: Table SQL / Client
Affects Versions: 1.10.0
 Environment: Flink 1.10

CDH 5.13.3

 

 
Reporter: Hongtao Zhang
 Fix For: 1.10.0


The Flink SQL Client's active command line doesn't load the FlinkYarnSessionCli 
general options.

The general options contain "addressOption", with which a user can specify 
--jobmanager="yarn-cluster" (or -m) to run the SQL on a YARN cluster.

 



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308526695
 
 

 ##
 File path: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventGenerator.java
 ##
 @@ -0,0 +1,111 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventSerializationSchema;
+
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.ProducerConfig;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.ByteArraySerializer;
+
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+import static 
org.apache.flink.streaming.examples.windowing.clickeventcount.ClickEventCount.WINDOW_SIZE;
+
+/**
+ * A generator which pushes {@link ClickEvent}s into a Kafka Topic configured 
via `--topic` and
+ * `--bootstrap.servers`.
+ *
+ *  The generator creates the same number of {@link ClickEvent}s for all 
pages. The delay between
+ * events is chosen such that processing time and event time roughly align. 
The generator always
+ * creates the same sequence of events. 
+ *
+ */
+public class ClickEventGenerator {
+
+   public static final int EVENTS_PER_WINDOW = 1000;
+
+   private static final List<String> pages = Arrays.asList("/help", 
"/index", "/shop", "/jobs", "/about", "/news");
+
+   private static KafkaProducer<byte[], byte[]> producer;
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Created] (FLINK-13486) AsyncDataStreamITCase.testOrderedWaitUsingAnonymousFunction instable on Travis

2019-07-29 Thread Tzu-Li (Gordon) Tai (JIRA)
Tzu-Li (Gordon) Tai created FLINK-13486:
---

 Summary: 
AsyncDataStreamITCase.testOrderedWaitUsingAnonymousFunction instable on Travis
 Key: FLINK-13486
 URL: https://issues.apache.org/jira/browse/FLINK-13486
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Tests
Reporter: Tzu-Li (Gordon) Tai


https://api.travis-ci.org/v3/job/562526494/log.txt

{code}
15:09:27.608 [ERROR] 
testOrderedWaitUsingAnonymousFunction(org.apache.flink.streaming.api.scala.AsyncDataStreamITCase)
  Time elapsed: 1.315 s  <<< ERROR!
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at 
org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.executeAndValidate(AsyncDataStreamITCase.scala:81)
at 
org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.testAsyncWaitUsingAnonymousFunction(AsyncDataStreamITCase.scala:135)
at 
org.apache.flink.streaming.api.scala.AsyncDataStreamITCase.testOrderedWaitUsingAnonymousFunction(AsyncDataStreamITCase.scala:92)
Caused by: java.lang.Exception: An async function call terminated with an 
exception. Failing the AsyncWaitOperator.
Caused by: java.util.concurrent.ExecutionException: 
java.util.concurrent.TimeoutException: Async function call has timed out.
Caused by: java.util.concurrent.TimeoutException: Async function call has timed 
out.
{code}
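The timeout path in this failure can be reproduced outside Flink with a plain `CompletableFuture`. This is a sketch of the failure mode only, not the operator's actual internals: an async call that outlives its time budget surfaces as a `TimeoutException`, which is what the AsyncWaitOperator then reports.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AsyncTimeoutDemo {
    public static void main(String[] args) throws Exception {
        // The "async function" never completes within its time budget,
        // mirroring an async callback that misses the operator's deadline.
        CompletableFuture<String> slowCall = new CompletableFuture<>();
        try {
            slowCall.get(100, TimeUnit.MILLISECONDS);
            throw new AssertionError("expected a timeout");
        } catch (TimeoutException e) {
            System.out.println("Async function call has timed out.");
        }
    }
}
```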



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308526556
 
 

 ##
 File path: 
docs/getting-started/docker-playgrounds/interactive_sql_playground.md
 ##
 @@ -0,0 +1,30 @@
+---
+title: "Interactive SQL Playground"
+nav-title: 'Interactive SQL Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 2
+---
+
+
+* This will be replaced by the TOC
+{:toc}
+
+This section will describe how to deploy and use a `docker-compose`-based 
playground centered around
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530317
 
 

 ##
 File path: 
flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/windowing/clickeventcount/ClickEventCount.java
 ##
 @@ -0,0 +1,105 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.examples.windowing.clickeventcount;
+
+import org.apache.flink.api.java.utils.ParameterTool;
+import org.apache.flink.streaming.api.TimeCharacteristic;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import 
org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
+import org.apache.flink.streaming.api.windowing.time.Time;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.ClickEventStatisticsCollector;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.functions.CountingAggregator;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEvent;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventDeserializationSchema;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatistics;
+import 
org.apache.flink.streaming.examples.windowing.clickeventcount.records.ClickEventStatisticsSerializationSchema;
+
+import org.apache.kafka.clients.consumer.ConsumerConfig;
+import org.apache.kafka.clients.producer.ProducerConfig;
+
+import java.util.Properties;
+import java.util.concurrent.TimeUnit;
+
+/**
+ * A simple streaming job reading {@link ClickEvent}s from Kafka, counting 
events per minute and
+ * writing the resulting {@link ClickEventStatistics} back to Kafka.
+ *
+ *  It can be run with or without checkpointing and with event time or 
processing time semantics.
+ * 
+ *
+ *
+ */
+public class ClickEventCount {
+
+   public static final String CHECKPOINTING_OPTION = "checkpointing";
+   public static final String EVENT_TIME_OPTION = "event-time";
+
+   public static final Time WINDOW_SIZE = Time.of(60, TimeUnit.SECONDS);
+
+   public static void main(String[] args) throws Exception {
+   final ParameterTool params = ParameterTool.fromArgs(args);
+
+   final StreamExecutionEnvironment env = 
StreamExecutionEnvironment.getExecutionEnvironment();
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] [examples] Initial Version of Flink Cluster Playground

2019-07-29 Thread GitBox
knaufk commented on a change in pull request #9192: [FLINK-12749] [docs] 
[examples] Initial Version of Flink Cluster Playground
URL: https://github.com/apache/flink/pull/9192#discussion_r308530372
 
 

 ##
 File path: docs/getting-started/docker-playgrounds/flink_cluster_playground.md
 ##
 @@ -0,0 +1,680 @@
+---
+title: "Flink Cluster Playground"
+nav-title: 'Flink Cluster Playground'
+nav-parent_id: docker-playgrounds
+nav-pos: 1
+---
+
+
+There are many ways to deploy and operate Apache Flink in various 
environments. Regardless of this
+variety, the fundamental building blocks of a Flink Cluster remain the same 
and similar
+operational principles apply.
+
+This docker compose-based playground will get you started with Apache Flink 
operations quickly and
+will briefly introduce you to the main components that make up a Flink Cluster.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Anatomy of this Playground
+
+This playground consists of a long living
+[Flink Session Cluster]({{ site.baseurl 
}}/concepts/glossary.html#flink-session-cluster) and a Kafka
+Cluster.
+
+A Flink Cluster always consists of a 
+[Flink Master]({{ site.baseurl }}/concepts/glossary.html#flink-master) and one 
or more 
+[Flink TaskManagers]({{ site.baseurl 
}}/concepts/glossary.html#flink-taskmanager). The Flink Master 
+is responsible for handling Job submissions, the supervision of Jobs, as well as 
resource 
+management. The Flink TaskManagers are the worker processes and are 
responsible for the execution of 
+the actual [Tasks]({{ site.baseurl }}/concepts/glossary.html#task) which make 
up a Flink Job. In 
+this playground you will start with a single TaskManager, but scale out to 
more TaskManagers later. 
+Additionally, this playground comes with a dedicated *client* container, which 
we use to submit the 
+Flink Job initially and to perform various operational tasks later on.
+
+The Kafka Cluster consists of a Zookeeper server and a Kafka Broker.
+
+
+
+When the playground is started a Flink Job called *Flink Event Count* will be 
submitted to the 
+Flink Master. Additionally, two Kafka Topics *input* and *output* are created.
+
+
+
+The Job consumes `ClickEvent`s from the *input* topic, each with a `timestamp` 
and a `page`. The 
+events are then keyed by `page` and counted in one minute 
+[windows]({{ site.baseurl }}/dev/stream/operators/windows.html). The results 
are written to the 
+*output* topic. 
+
+There are six different `page`s and the **events are generated so that each 
window contains exactly 
+one thousand records**.
+
+{% top %}
+
+## Setup
+
+{% if site.version contains "SNAPSHOT" %}
+
+  Note: The Apache Flink Docker images used for this playground are 
only available for
+  released versions of Apache Flink. Since you are currently looking at the 
latest SNAPSHOT
+  version of the documentation the branch referenced below will not exist. You 
can either change it 
+  manually or switch to the released version of the documentation via the 
release picker.
 
 Review comment:
   Done.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

