[flink] branch master updated: [FLINK-13488][tests] Harden ConnectedComponents E2E test

2019-08-14 Thread gary
This is an automated email from the ASF dual-hosted git repository.

gary pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b5e64e7  [FLINK-13488][tests] Harden ConnectedComponents E2E test
b5e64e7 is described below

commit b5e64e7ea950c3da7cc643f2bda198603ca24129
Author: Gary Yao 
AuthorDate: Mon Aug 12 15:56:31 2019 +0200

[FLINK-13488][tests] Harden ConnectedComponents E2E test

By default the test starts 25 TMs with a single slot each. This is not
sustainable on Travis CI. This commit changes the test so that it only
starts 2 TMs that each offer 13 slots by default.

Run 'set -Eexuo pipefail' at the beginning of the test as recommended by the
README.md.
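For reference, the strict mode this commit enables can be sketched as follows; the per-flag annotations are editorial notes, not part of the commit:

```shell
#!/usr/bin/env bash
# Annotated sketch of the strict mode recommended by the test-scripts README.
set -Eexuo pipefail
# -E           ERR traps fire inside functions, subshells, and substitutions too
# -e           exit immediately when any command fails
# -x           print each command before running it (useful in CI logs)
# -u           expanding an unset variable is an error
# -o pipefail  a pipeline fails if any stage in it fails, not just the last
echo "strict mode enabled"
```

With `-e` and `pipefail`, a failing command in the middle of a pipeline now fails the whole test script instead of being silently ignored.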
---
 .../test_high_parallelism_iterations.sh| 26 +++---
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git 
a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh 
b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
index cfa5cae..c21dbc4 100755
--- a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
+++ b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
@@ -17,37 +17,27 @@
 # limitations under the License.
 

 
+set -Eexuo pipefail
+
 source "$(dirname "$0")"/common.sh
 
 PARALLELISM="${1:-25}"
+TM_NUM=2
+let "SLOTS_PER_TM = (PARALLELISM + TM_NUM - 1) / TM_NUM"
 
 TEST=flink-high-parallelism-iterations-test
 TEST_PROGRAM_NAME=HighParallelismIterationsTestProgram
 TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
 
-set_config_key "taskmanager.heap.size" "52m" # 52Mb x 100 TMs = 5Gb total heap
-
-set_config_key "taskmanager.memory.size" "8" # 8Mb
-set_config_key "taskmanager.network.memory.min" "8mb"
-set_config_key "taskmanager.network.memory.max" "8mb"
-set_config_key "taskmanager.memory.segment-size" "8kb"
-
-set_config_key "taskmanager.network.netty.server.numThreads" "1"
-set_config_key "taskmanager.network.netty.client.numThreads" "1"
-set_config_key "taskmanager.network.request-backoff.max" "6"
-
-set_config_key "taskmanager.numberOfTaskSlots" "1"
+set_config_key "taskmanager.numberOfTaskSlots" "$SLOTS_PER_TM"
 
 print_mem_use
 start_cluster
 print_mem_use
 
-let TMNUM=$PARALLELISM-1
-echo "Start $TMNUM more task managers"
-for i in `seq 1 $TMNUM`; do
-$FLINK_DIR/bin/taskmanager.sh start
-print_mem_use
-done
+let "TM_NUM -= 1"
+start_taskmanagers ${TM_NUM}
+print_mem_use
 
 $FLINK_DIR/bin/flink run -p $PARALLELISM $TEST_PROGRAM_JAR
 print_mem_use
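The new slot computation uses integer ceiling division so that the TaskManagers together always offer at least PARALLELISM slots. A standalone sketch with the script's default values:

```shell
#!/usr/bin/env bash
# Ceiling division: with PARALLELISM=25 and TM_NUM=2 this yields 13 slots
# per TM, so 2 x 13 = 26 slots cover the requested parallelism of 25.
PARALLELISM="${1:-25}"
TM_NUM=2
SLOTS_PER_TM=$(( (PARALLELISM + TM_NUM - 1) / TM_NUM ))
echo "$SLOTS_PER_TM"
```

The old loop that invoked `taskmanager.sh start` per TM is also replaced by the `start_taskmanagers` helper from common.sh, started for `TM_NUM - 1` extra TMs since `start_cluster` already brings up the first one.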



[flink] branch release-1.9 updated: [FLINK-13488][tests] Harden ConnectedComponents E2E test

2019-08-14 Thread gary
This is an automated email from the ASF dual-hosted git repository.

gary pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new b40dba4  [FLINK-13488][tests] Harden ConnectedComponents E2E test
b40dba4 is described below

commit b40dba4e55f8d2d3663107ecc28dae43299d3701
Author: Gary Yao 
AuthorDate: Mon Aug 12 15:56:31 2019 +0200

[FLINK-13488][tests] Harden ConnectedComponents E2E test

By default the test starts 25 TMs with a single slot each. This is not
sustainable on Travis CI. This commit changes the test so that it only
starts 2 TMs that each offer 13 slots by default.

Run 'set -Eexuo pipefail' at the beginning of the test as recommended by the
README.md.
---
 .../test_high_parallelism_iterations.sh| 26 +++---
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git 
a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh 
b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
index cfa5cae..c21dbc4 100755
--- a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
+++ b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
@@ -17,37 +17,27 @@
 # limitations under the License.
 

 
+set -Eexuo pipefail
+
 source "$(dirname "$0")"/common.sh
 
 PARALLELISM="${1:-25}"
+TM_NUM=2
+let "SLOTS_PER_TM = (PARALLELISM + TM_NUM - 1) / TM_NUM"
 
 TEST=flink-high-parallelism-iterations-test
 TEST_PROGRAM_NAME=HighParallelismIterationsTestProgram
 TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
 
-set_config_key "taskmanager.heap.size" "52m" # 52Mb x 100 TMs = 5Gb total heap
-
-set_config_key "taskmanager.memory.size" "8" # 8Mb
-set_config_key "taskmanager.network.memory.min" "8mb"
-set_config_key "taskmanager.network.memory.max" "8mb"
-set_config_key "taskmanager.memory.segment-size" "8kb"
-
-set_config_key "taskmanager.network.netty.server.numThreads" "1"
-set_config_key "taskmanager.network.netty.client.numThreads" "1"
-set_config_key "taskmanager.network.request-backoff.max" "6"
-
-set_config_key "taskmanager.numberOfTaskSlots" "1"
+set_config_key "taskmanager.numberOfTaskSlots" "$SLOTS_PER_TM"
 
 print_mem_use
 start_cluster
 print_mem_use
 
-let TMNUM=$PARALLELISM-1
-echo "Start $TMNUM more task managers"
-for i in `seq 1 $TMNUM`; do
-$FLINK_DIR/bin/taskmanager.sh start
-print_mem_use
-done
+let "TM_NUM -= 1"
+start_taskmanagers ${TM_NUM}
+print_mem_use
 
 $FLINK_DIR/bin/flink run -p $PARALLELISM $TEST_PROGRAM_JAR
 print_mem_use



[flink] branch release-1.8 updated: [FLINK-13488][tests] Harden ConnectedComponents E2E test

2019-08-14 Thread gary
This is an automated email from the ASF dual-hosted git repository.

gary pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 481332e  [FLINK-13488][tests] Harden ConnectedComponents E2E test
481332e is described below

commit 481332e240188ce0d0e8e2074ff452e0cbcad5ee
Author: Gary Yao 
AuthorDate: Mon Aug 12 15:56:31 2019 +0200

[FLINK-13488][tests] Harden ConnectedComponents E2E test

By default the test starts 25 TMs with a single slot each. This is not
sustainable on Travis CI. This commit changes the test so that it only
starts 2 TMs that each offer 13 slots by default.

Run 'set -Eexuo pipefail' at the beginning of the test as recommended by the
README.md.
---
 .../test_high_parallelism_iterations.sh| 26 +++---
 1 file changed, 8 insertions(+), 18 deletions(-)

diff --git 
a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh 
b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
index 93668d8..4607ae2 100755
--- a/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
+++ b/flink-end-to-end-tests/test-scripts/test_high_parallelism_iterations.sh
@@ -17,37 +17,27 @@
 # limitations under the License.
 

 
+set -Eexuo pipefail
+
 source "$(dirname "$0")"/common.sh
 
 PARALLELISM="${1:-25}"
+TM_NUM=2
+let "SLOTS_PER_TM = (PARALLELISM + TM_NUM - 1) / TM_NUM"
 
 TEST=flink-high-parallelism-iterations-test
 TEST_PROGRAM_NAME=HighParallelismIterationsTestProgram
 TEST_PROGRAM_JAR=${END_TO_END_DIR}/$TEST/target/$TEST_PROGRAM_NAME.jar
 
-set_conf "taskmanager.heap.mb" "52" # 52Mb x 100 TMs = 5Gb total heap
-
-set_conf "taskmanager.memory.size" "8" # 8Mb
-set_conf "taskmanager.network.memory.min" "8mb"
-set_conf "taskmanager.network.memory.max" "8mb"
-set_conf "taskmanager.memory.segment-size" "8kb"
-
-set_conf "taskmanager.network.netty.server.numThreads" "1"
-set_conf "taskmanager.network.netty.client.numThreads" "1"
-set_conf "taskmanager.network.request-backoff.max" "6"
-
-set_conf "taskmanager.numberOfTaskSlots" "1"
+set_conf "taskmanager.numberOfTaskSlots" "$SLOTS_PER_TM"
 
 print_mem_use
 start_cluster
 print_mem_use
 
-let TMNUM=$PARALLELISM-1
-echo "Start $TMNUM more task managers"
-for i in `seq 1 $TMNUM`; do
-$FLINK_DIR/bin/taskmanager.sh start
-print_mem_use
-done
+let "TM_NUM -= 1"
+start_taskmanagers ${TM_NUM}
+print_mem_use
 
 $FLINK_DIR/bin/flink run -p $PARALLELISM $TEST_PROGRAM_JAR
 print_mem_use



[flink] branch release-1.9 updated: [FLINK-13501][doc] Fixes a few issues in documentation for Hive integration

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new d895dfe  [FLINK-13501][doc] Fixes a few issues in documentation for 
Hive integration
d895dfe is described below

commit d895dfec7b451358f9e83d1614f48e124938fc58
Author: zjuwangg 
AuthorDate: Wed Aug 14 22:31:26 2019 +0800

[FLINK-13501][doc] Fixes a few issues in documentation for Hive integration

This closes #9437.
---
 docs/dev/table/hive/index.md| 14 +++---
 docs/dev/table/hive/index.zh.md | 12 ++--
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index a6dbefa..ebafb1a 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -2,7 +2,7 @@
 title: "Hive"
 nav-id: hive_tableapi
 nav-parent_id: tableapi
-nav-pos: 100
+nav-pos: 110
 is_beta: true
 nav-show_overview: true
 ---
@@ -139,9 +139,9 @@ Connect to an existing Hive installation using the Hive 
[Catalog]({{ site.baseur
 String name= "myhive";
 String defaultDatabase = "mydatabase";
 String hiveConfDir = "/opt/hive-conf";
-String version = "2.3.2"; // or 1.2.1
+String version = "2.3.4"; // or 1.2.1
 
-HiveCatalog hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
+HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, 
version);
 tableEnv.registerCatalog(hive);
 {% endhighlight %}
 
@@ -151,9 +151,9 @@ tableEnv.registerCatalog(hive);
 val name= "myhive"
 val defaultDatabase = "mydatabase"
 val hiveConfDir = "/opt/hive-conf"
-val version = "2.3.2" // or 1.2.1
+val version = "2.3.4" // or 1.2.1
 
-val hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
+val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
 tableEnv.registerCatalog(hive)
 {% endhighlight %}
 
@@ -238,8 +238,8 @@ Currently `HiveCatalog` supports most Flink data types with 
the following mappin
 BINARY
 
 
-ARRAY\
-LIST\
+ARRAY<T>
+LIST<T>
 
 
 MAP
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index a6dbefa..0433a9b 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -139,9 +139,9 @@ Connect to an existing Hive installation using the Hive 
[Catalog]({{ site.baseur
 String name= "myhive";
 String defaultDatabase = "mydatabase";
 String hiveConfDir = "/opt/hive-conf";
-String version = "2.3.2"; // or 1.2.1
+String version = "2.3.4"; // or 1.2.1
 
-HiveCatalog hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
+HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, 
version);
 tableEnv.registerCatalog(hive);
 {% endhighlight %}
 
@@ -151,9 +151,9 @@ tableEnv.registerCatalog(hive);
 val name= "myhive"
 val defaultDatabase = "mydatabase"
 val hiveConfDir = "/opt/hive-conf"
-val version = "2.3.2" // or 1.2.1
+val version = "2.3.4" // or 1.2.1
 
-val hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
+val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
 tableEnv.registerCatalog(hive)
 {% endhighlight %}
 
@@ -238,8 +238,8 @@ Currently `HiveCatalog` supports most Flink data types with 
the following mappin
 BINARY
 
 
-ARRAY\
-LIST\
+ARRAY<T>
+LIST<T>
 
 
 MAP



[flink] branch master updated: [FLINK-13501][doc] Fixes a few issues in documentation for Hive integration

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 7b4c232  [FLINK-13501][doc] Fixes a few issues in documentation for 
Hive integration
7b4c232 is described below

commit 7b4c23255b900ced0afe060194bbb737981405e2
Author: zjuwangg 
AuthorDate: Wed Aug 14 22:31:26 2019 +0800

[FLINK-13501][doc] Fixes a few issues in documentation for Hive integration

This closes #9437.
---
 docs/dev/table/hive/index.md| 14 +++---
 docs/dev/table/hive/index.zh.md | 12 ++--
 2 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index a6dbefa..ebafb1a 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -2,7 +2,7 @@
 title: "Hive"
 nav-id: hive_tableapi
 nav-parent_id: tableapi
-nav-pos: 100
+nav-pos: 110
 is_beta: true
 nav-show_overview: true
 ---
@@ -139,9 +139,9 @@ Connect to an existing Hive installation using the Hive 
[Catalog]({{ site.baseur
 String name= "myhive";
 String defaultDatabase = "mydatabase";
 String hiveConfDir = "/opt/hive-conf";
-String version = "2.3.2"; // or 1.2.1
+String version = "2.3.4"; // or 1.2.1
 
-HiveCatalog hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
+HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, 
version);
 tableEnv.registerCatalog(hive);
 {% endhighlight %}
 
@@ -151,9 +151,9 @@ tableEnv.registerCatalog(hive);
 val name= "myhive"
 val defaultDatabase = "mydatabase"
 val hiveConfDir = "/opt/hive-conf"
-val version = "2.3.2" // or 1.2.1
+val version = "2.3.4" // or 1.2.1
 
-val hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
+val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
 tableEnv.registerCatalog(hive)
 {% endhighlight %}
 
@@ -238,8 +238,8 @@ Currently `HiveCatalog` supports most Flink data types with 
the following mappin
 BINARY
 
 
-ARRAY\
-LIST\
+ARRAY<T>
+LIST<T>
 
 
 MAP
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index a6dbefa..0433a9b 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -139,9 +139,9 @@ Connect to an existing Hive installation using the Hive 
[Catalog]({{ site.baseur
 String name= "myhive";
 String defaultDatabase = "mydatabase";
 String hiveConfDir = "/opt/hive-conf";
-String version = "2.3.2"; // or 1.2.1
+String version = "2.3.4"; // or 1.2.1
 
-HiveCatalog hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
+HiveCatalog hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, 
version);
 tableEnv.registerCatalog(hive);
 {% endhighlight %}
 
@@ -151,9 +151,9 @@ tableEnv.registerCatalog(hive);
 val name= "myhive"
 val defaultDatabase = "mydatabase"
 val hiveConfDir = "/opt/hive-conf"
-val version = "2.3.2" // or 1.2.1
+val version = "2.3.4" // or 1.2.1
 
-val hive new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
+val hive = new HiveCatalog(name, defaultDatabase, hiveConfDir, version)
 tableEnv.registerCatalog(hive)
 {% endhighlight %}
 
@@ -238,8 +238,8 @@ Currently `HiveCatalog` supports most Flink data types with 
the following mappin
 BINARY
 
 
-ARRAY\
-LIST\
+ARRAY<T>
+LIST<T>
 
 
 MAP



[flink] branch master updated: [docs] Broken links in Hive documentation

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new e5dd9bc  [docs] Broken links in Hive documentation
e5dd9bc is described below

commit e5dd9bcd71201d50d68c6932d438e6e196ea466e
Author: Seth Wiesman 
AuthorDate: Tue Aug 13 19:04:44 2019 -0400

[docs] Broken links in Hive documentation

This closes #9435.
---
 docs/dev/table/hive/index.md| 2 +-
 docs/dev/table/hive/index.zh.md | 2 +-
 docs/dev/table/sqlClient.md | 2 +-
 docs/dev/table/sqlClient.zh.md  | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index ebafb1a..89511f3 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -130,7 +130,7 @@ To integrate with Hive users need the following 
dependencies in their project.
 
 ## Connecting To Hive
 
-Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalog.html) through the table environment or YAML 
configuration.
+Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalogs.html) through the table environment or YAML 
configuration.
 
 
 
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index 0433a9b..ab86482 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -130,7 +130,7 @@ To integrate with Hive users need the following 
dependencies in their project.
 
 ## Connecting To Hive
 
-Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalog.html) through the table environment or YAML 
configuration.
+Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalogs.html) through the table environment or YAML 
configuration.
 
 
 
diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 059cd94..24c63cb 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -464,7 +464,7 @@ execution:
current-database: mydb1
 {% endhighlight %}
 
-For more information about catalog, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalog.html).
+For more information about catalogs, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalogs.html).
 
 Detached SQL Queries
 
diff --git a/docs/dev/table/sqlClient.zh.md b/docs/dev/table/sqlClient.zh.md
index b942bf7..689ef67 100644
--- a/docs/dev/table/sqlClient.zh.md
+++ b/docs/dev/table/sqlClient.zh.md
@@ -464,7 +464,7 @@ execution:
current-database: mydb1
 {% endhighlight %}
 
-For more information about catalog, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalog.html).
+For more information about catalogs, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalogs.html).
 
 Detached SQL Queries
 



[flink] branch master updated: [hotfix][doc] remove obsolete catalog.md in favor of new catalogs.md

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new abb35b6  [hotfix][doc] remove obsolete catalog.md in favor of new 
catalogs.md
abb35b6 is described below

commit abb35b6f9bead00fe294c9f0c01aae1676e3d34c
Author: bowen.li 
AuthorDate: Wed Aug 14 16:10:19 2019 -0700

[hotfix][doc] remove obsolete catalog.md in favor of new catalogs.md
---
 docs/dev/table/catalog.zh.md | 366 ---
 1 file changed, 366 deletions(-)

diff --git a/docs/dev/table/catalog.zh.md b/docs/dev/table/catalog.zh.md
deleted file mode 100644
index a1920d8..000
--- a/docs/dev/table/catalog.zh.md
+++ /dev/null
@@ -1,366 +0,0 @@

-title: "Catalog"
-is_beta: true
-nav-parent_id: tableapi
-nav-pos: 100

-
-
-Catalogs provide metadata, such as names, schemas, statistics of tables, and 
information about how to access data stored in a database or other external 
systems. Once a catalog is registered within a `TableEnvironment`, all its 
meta-objects are accessible from the Table API and SQL queries.
-
-
-* This will be replaced by the TOC
-{:toc}
-
-
-Catalog Interface
--
-
-APIs are defined in `Catalog` interface. The interface defines a set of APIs 
to read and write catalog meta-objects such as database, tables, partitions, 
views, and functions.
-
-
-Catalog Meta-Objects Naming Structure
--
-
-Flink's catalogs use a strict two-level structure, that is, catalogs contain 
databases, and databases contain meta-objects. Thus, the full name of a 
meta-object is always structured as `catalogName`.`databaseName`.`objectName`.
-
-Each `TableEnvironment` has a `CatalogManager` to manager all registered 
catalogs. To ease access to meta-objects, `CatalogManager` has a concept of 
current catalog and current database. By setting current catalog and current 
database, users can use just the meta-object's name in their queries. This 
greatly simplifies user experience.
-
-For example, a previous query as
-
-```sql
-select * from mycatalog.mydb.myTable;
-```
-
-can be shortened to
-
-```sql
-select * from myTable;
-```
-
-To querying tables in a different database under the current catalog, users 
don't need to specify the catalog name. In our example, it would be
-
-```
-select * from mydb2.myTable2
-```
-
-`CatalogManager` always has a built-in `GenericInMemoryCatalog` named 
`default_catalog`, which has a built-in default database named 
`default_database`. If no other catalog and database are explicitly set, they 
will be the current catalog and current database by default. All temp 
meta-objects, such as those defined by `TableEnvironment#registerTable`  are 
registered to this catalog. 
-
-Users can set current catalog and database via 
`TableEnvironment.useCatalog(...)` and
-`TableEnvironment.useDatabase(...)` in Table API, or `USE CATALOG ...` and 
`USE ...` in Flink SQL
- Client.
-
-
-Catalog Types
--
-
-## GenericInMemoryCatalog
-
-The default catalog; all meta-objects in this catalog are stored in memory, 
and be will be lost once the session shuts down.
-
-Its config entry value in SQL CLI yaml file is "generic_in_memory".
-
-## HiveCatalog
-
-Flink's `HiveCatalog` can read and write both Flink and Hive meta-objects 
using Hive Metastore as persistent storage.
-
-Its config entry value in SQL CLI yaml file is "hive".
-
-### Persist Flink meta-objects
-
-Historically, Flink meta-objects are only stored in memory and are per session 
based. That means users have to recreate all the meta-objects every time they 
start a new session.
-
-To maintain meta-objects across sessions, users can choose to use 
`HiveCatalog` to persist all of users' Flink streaming (unbounded-stream) and 
batch (bounded-stream) meta-objects. Because Hive Metastore is only used for 
storage, Hive itself may not understand Flink's meta-objects stored in the 
metastore.
-
-### Integrate Flink with Hive metadata
-
-The ultimate goal for integrating Flink with Hive metadata is that:
-
-1. Existing meta-objects, like tables, views, and functions, created by Hive 
or other Hive-compatible applications can be used by Flink
-
-2. Meta-objects created by `HiveCatalog` can be written back to Hive metastore 
such that Hive and other Hive-compatible applications can consume.
-
-### Supported Hive Versions
-
-Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
-
-The Hive version is explicitly specified as a String, either by passing it to 
the constructor when creating `HiveCatalog` instances directly in Table API or 
specifying it in yaml config file in SQL CLI. The Hive version string are 
`2.3.4` and `1.2.1`.
-
-### Case Insensitive to Meta-Object Names
-
-Note that Hive Metastore stores meta-object names in lower cases. Thus, unlike 
`GenericInMemoryCatalog`, 

[flink] branch release-1.9 updated: [hotfix][doc] remove obsolete catalog.md in favor of new catalogs.md

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new 2d9c4e7  [hotfix][doc] remove obsolete catalog.md in favor of new 
catalogs.md
2d9c4e7 is described below

commit 2d9c4e7b330a06b746751fc619c5b25bf40c96b0
Author: bowen.li 
AuthorDate: Wed Aug 14 16:10:19 2019 -0700

[hotfix][doc] remove obsolete catalog.md in favor of new catalogs.md
---
 docs/dev/table/catalog.zh.md | 363 ---
 1 file changed, 363 deletions(-)

diff --git a/docs/dev/table/catalog.zh.md b/docs/dev/table/catalog.zh.md
deleted file mode 100644
index c4a29cc..000
--- a/docs/dev/table/catalog.zh.md
+++ /dev/null
@@ -1,363 +0,0 @@

-title: "Catalog"
-is_beta: true
-nav-parent_id: tableapi
-nav-pos: 100

-
-
-Catalogs provide metadata, such as names, schemas, statistics of tables, and 
information about how to access data stored in a database or other external 
systems. Once a catalog is registered within a `TableEnvironment`, all its 
meta-objects are accessible from the Table API and SQL queries.
-
-
-* This will be replaced by the TOC
-{:toc}
-
-
-Catalog Interface
--
-
-APIs are defined in `Catalog` interface. The interface defines a set of APIs 
to read and write catalog meta-objects such as database, tables, partitions, 
views, and functions.
-
-
-Catalog Meta-Objects Naming Structure
--
-
-Flink's catalogs use a strict two-level structure, that is, catalogs contain 
databases, and databases contain meta-objects. Thus, the full name of a 
meta-object is always structured as `catalogName`.`databaseName`.`objectName`.
-
-Each `TableEnvironment` has a `CatalogManager` to manager all registered 
catalogs. To ease access to meta-objects, `CatalogManager` has a concept of 
current catalog and current database. By setting current catalog and current 
database, users can use just the meta-object's name in their queries. This 
greatly simplifies user experience.
-
-For example, a previous query as
-
-```sql
-select * from mycatalog.mydb.myTable;
-```
-
-can be shortened to
-
-```sql
-select * from myTable;
-```
-
-To querying tables in a different database under the current catalog, users 
don't need to specify the catalog name. In our example, it would be
-
-```
-select * from mydb2.myTable2
-```
-
-`CatalogManager` always has a built-in `GenericInMemoryCatalog` named 
`default_catalog`, which has a built-in default database named 
`default_database`. If no other catalog and database are explicitly set, they 
will be the current catalog and current database by default. All temp 
meta-objects, such as those defined by `TableEnvironment#registerTable`  are 
registered to this catalog. 
-
-Users can set current catalog and database via 
`TableEnvironment.useCatalog(...)` and `TableEnvironment.useDatabase(...)` in 
Table API, or `USE CATALOG ...` and `USE DATABASE ...` in Flink SQL.
-
-
-Catalog Types
--
-
-## GenericInMemoryCatalog
-
-The default catalog; all meta-objects in this catalog are stored in memory, 
and be will be lost once the session shuts down.
-
-Its config entry value in SQL CLI yaml file is "generic_in_memory".
-
-## HiveCatalog
-
-Flink's `HiveCatalog` can read and write both Flink and Hive meta-objects 
using Hive Metastore as persistent storage.
-
-Its config entry value in SQL CLI yaml file is "hive".
-
-### Persist Flink meta-objects
-
-Historically, Flink meta-objects are only stored in memory and are per session 
based. That means users have to recreate all the meta-objects every time they 
start a new session.
-
-To maintain meta-objects across sessions, users can choose to use 
`HiveCatalog` to persist all of users' Flink streaming (unbounded-stream) and 
batch (bounded-stream) meta-objects. Because Hive Metastore is only used for 
storage, Hive itself may not understand Flink's meta-objects stored in the 
metastore.
-
-### Integrate Flink with Hive metadata
-
-The ultimate goal for integrating Flink with Hive metadata is that:
-
-1. Existing meta-objects, like tables, views, and functions, created by Hive 
or other Hive-compatible applications can be used by Flink
-
-2. Meta-objects created by `HiveCatalog` can be written back to Hive metastore 
such that Hive and other Hive-compatible applications can consume.
-
-## User-configured Catalog
-
-Catalogs are pluggable. Users can develop custom catalogs by implementing the 
`Catalog` interface, which defines a set of APIs for reading and writing 
catalog meta-objects such as database, tables, partitions, views, and functions.
-
-
-HiveCatalog

-
-## Supported Hive Versions
-
-Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
-
-The Hive version is explicitly specified as a String, either by passing it to 
the constructor when 

[flink] branch release-1.9 updated: [docs] Broken links in Hive documentation

2019-08-14 Thread bli
This is an automated email from the ASF dual-hosted git repository.

bli pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.9 by this push:
 new c0502ab  [docs] Broken links in Hive documentation
c0502ab is described below

commit c0502ab261c558249de43808ba262fbd4fddc517
Author: Seth Wiesman 
AuthorDate: Tue Aug 13 19:04:44 2019 -0400

[docs] Broken links in Hive documentation

This closes #9435.
---
 docs/dev/table/hive/index.md|  2 +-
 docs/dev/table/hive/index.zh.md |  2 +-
 docs/dev/table/sqlClient.md |  2 +-
 docs/dev/table/sqlClient.zh.md  | 28 
 4 files changed, 31 insertions(+), 3 deletions(-)

diff --git a/docs/dev/table/hive/index.md b/docs/dev/table/hive/index.md
index ebafb1a..89511f3 100644
--- a/docs/dev/table/hive/index.md
+++ b/docs/dev/table/hive/index.md
@@ -130,7 +130,7 @@ To integrate with Hive users need the following 
dependencies in their project.
 
 ## Connecting To Hive
 
-Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalog.html) through the table environment or YAML 
configuration.
+Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalogs.html) through the table environment or YAML 
configuration.
 
 
 
diff --git a/docs/dev/table/hive/index.zh.md b/docs/dev/table/hive/index.zh.md
index 0433a9b..ab86482 100644
--- a/docs/dev/table/hive/index.zh.md
+++ b/docs/dev/table/hive/index.zh.md
@@ -130,7 +130,7 @@ To integrate with Hive users need the following 
dependencies in their project.
 
 ## Connecting To Hive
 
-Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalog.html) through the table environment or YAML 
configuration.
+Connect to an existing Hive installation using the Hive [Catalog]({{ 
site.baseurl }}/dev/table/catalogs.html) through the table environment or YAML 
configuration.
 
 
 
diff --git a/docs/dev/table/sqlClient.md b/docs/dev/table/sqlClient.md
index 059cd94..24c63cb 100644
--- a/docs/dev/table/sqlClient.md
+++ b/docs/dev/table/sqlClient.md
@@ -464,7 +464,7 @@ execution:
current-database: mydb1
 {% endhighlight %}
 
-For more information about catalog, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalog.html).
+For more information about catalogs, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalogs.html).
 
 Detached SQL Queries
 
diff --git a/docs/dev/table/sqlClient.zh.md b/docs/dev/table/sqlClient.zh.md
index b1d8f4c..d5bd97b 100644
--- a/docs/dev/table/sqlClient.zh.md
+++ b/docs/dev/table/sqlClient.zh.md
@@ -410,6 +410,34 @@ This process can be recursively performed until all the 
constructor parameters a
 
 {% top %}
 
+Catalogs
+
+
+Catalogs can be defined as a set of YAML properties and are automatically 
registered to the environment upon starting SQL Client.
+
+Users can specify which catalog they want to use as the current catalog in SQL 
CLI, and which database of the catalog they want to use as the current database.
+
+{% highlight yaml %}
+catalogs:
+   - name: catalog_1
+ type: hive
+ property-version: 1
+ default-database: mydb2
+ hive-version: 1.2.1
+ hive-conf-dir: 
+   - name: catalog_2
+ type: hive
+ property-version: 1
+ hive-conf-dir: 
+
+execution:
+   ...
+   current-catalog: catalog_1
+   current-database: mydb1
+{% endhighlight %}
+
+For more information about catalogs, see [Catalogs]({{ site.baseurl 
}}/dev/table/catalogs.html).
+
 Detached SQL Queries
 
 



[flink] branch master updated: [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new a315771  [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads
a315771 is described below

commit a3157710fe8267f689ae9a6f4f2338b97ae2d8c0
Author: Aleksey Pak 
AuthorDate: Tue Aug 13 20:47:02 2019 +0200

    [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads

This closes #9429.
---
 flink-end-to-end-tests/test-scripts/kafka-common.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-end-to-end-tests/test-scripts/kafka-common.sh b/flink-end-to-end-tests/test-scripts/kafka-common.sh
index 4664853..96ac49b 100644
--- a/flink-end-to-end-tests/test-scripts/kafka-common.sh
+++ b/flink-end-to-end-tests/test-scripts/kafka-common.sh
@@ -36,7 +36,7 @@ function setup_kafka_dist {
   mkdir -p $TEST_DATA_DIR
   
KAFKA_URL="https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka_2.11-$KAFKA_VERSION.tgz"
   echo "Downloading Kafka from $KAFKA_URL"
-  curl "$KAFKA_URL" --retry 5 --retry-max-time 60 > $TEST_DATA_DIR/kafka.tgz
+  curl "$KAFKA_URL" --retry 10 --retry-max-time 120 > $TEST_DATA_DIR/kafka.tgz
 
   tar xzf $TEST_DATA_DIR/kafka.tgz -C $TEST_DATA_DIR/
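For context on why both flags were doubled together: curl documents that retry waits start at one second and double on each subsequent retry (capped at ten minutes per wait), so the worst-case backoff budget grows quickly with the retry count, and `--retry-max-time` bounds the total time spent retrying. A small sketch of that arithmetic (the helper name is ours, not curl's):

```java
public class CurlRetryBudget {

    /**
     * Worst-case sum of curl's default backoff waits for the first
     * {@code retries} retries: 1s, doubling each retry, capped at 600s
     * per wait (per the curl --retry documentation).
     */
    static long totalBackoffSeconds(int retries) {
        long total = 0;
        long wait = 1;
        for (int i = 0; i < retries; i++) {
            total += wait;
            wait = Math.min(wait * 2, 600);
        }
        return total;
    }

    public static void main(String[] args) {
        // 5 retries: 1+2+4+8+16 = 31s, within the old --retry-max-time 60.
        System.out.println(totalBackoffSeconds(5));
        // 10 retries: 1+2+...+512 = 1023s, so in practice the retry
        // window is bounded by the new --retry-max-time 120 instead.
        System.out.println(totalBackoffSeconds(10));
    }
}
```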
 



[flink] branch release-1.9 updated (b40dba4 -> f4af5a8)

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git.


from b40dba4  [FLINK-13488][tests] Harden ConnectedComponents E2E test
 add f4af5a8  [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads

No new revisions were added by this update.

Summary of changes:
 flink-end-to-end-tests/test-scripts/kafka-common.sh | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)



[flink] branch release-1.9 updated (f4af5a8 -> ae1effc)

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git.


from f4af5a8  [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads
 new c87dcac  [FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition
 new ae1effc  [hotfix][tests] Fix code style error of TaskAsyncCallTest

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../flink/runtime/taskmanager/TaskAsyncCallTest.java   | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)



[flink] 02/02: [hotfix][tests] Fix code style error of TaskAsyncCallTest

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit ae1effc4deb1870c5109dd86ec60bd94e97d09b3
Author: ifndef-SleePy 
AuthorDate: Wed Aug 14 20:51:08 2019 +0800

[hotfix][tests] Fix code style error of TaskAsyncCallTest
---
 .../apache/flink/runtime/taskmanager/TaskAsyncCallTest.java| 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
index ef1d816..efd7ddb 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
@@ -41,9 +41,7 @@ import org.apache.flink.runtime.executiongraph.TaskInformation;
 import org.apache.flink.runtime.filecache.FileCache;
 import org.apache.flink.runtime.io.disk.iomanager.IOManager;
 import org.apache.flink.runtime.io.network.NettyShuffleEnvironmentBuilder;
-import org.apache.flink.runtime.shuffle.ShuffleEnvironment;
 import org.apache.flink.runtime.io.network.TaskEventDispatcher;
-import org.apache.flink.runtime.taskexecutor.PartitionProducerStateChecker;
 import org.apache.flink.runtime.io.network.partition.NoOpResultPartitionConsumableNotifier;
 import org.apache.flink.runtime.io.network.partition.ResultPartitionConsumableNotifier;
 import org.apache.flink.runtime.jobgraph.JobVertexID;
@@ -53,8 +51,10 @@ import org.apache.flink.runtime.memory.MemoryManager;
 import org.apache.flink.runtime.metrics.groups.TaskMetricGroup;
 import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
 import org.apache.flink.runtime.query.KvStateRegistry;
+import org.apache.flink.runtime.shuffle.ShuffleEnvironment;
 import org.apache.flink.runtime.state.TestTaskStateManager;
 import org.apache.flink.runtime.taskexecutor.KvStateService;
+import org.apache.flink.runtime.taskexecutor.PartitionProducerStateChecker;
 import org.apache.flink.runtime.taskexecutor.TestGlobalAggregateManager;
 import org.apache.flink.runtime.util.TestingTaskManagerRuntimeInfo;
 import org.apache.flink.util.SerializedValue;
@@ -80,6 +80,9 @@ import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
+/**
+ * Testing asynchronous call of {@link Task}.
+ */
 public class TaskAsyncCallTest extends TestLogger {
 
/** Number of expected checkpoints. */
@@ -265,6 +268,9 @@ public class TaskAsyncCallTest extends TestLogger {
executor);
}
 
+   /**
+* Invokable for testing checkpoints.
+*/
	public static class CheckpointsInOrderInvokable extends AbstractInvokable {
 
private volatile long lastCheckpointId = 0;



[flink] 01/02: [FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch release-1.9
in repository https://gitbox.apache.org/repos/asf/flink.git

commit c87dcac6fb6ed99faae38108023f8633e7a0f255
Author: ifndef-SleePy 
AuthorDate: Wed Aug 14 20:46:52 2019 +0800

[FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition

This closes #9436.
---
 .../java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
index f7b366b..ef1d816 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
@@ -196,9 +196,9 @@ public class TaskAsyncCallTest extends TestLogger {
triggerLatch.await();
 
task.notifyCheckpointComplete(1);
-   task.cancelExecution();
-
notifyCheckpointCompleteLatch.await();
+
+   task.cancelExecution();
stopLatch.await();
 
		assertThat(classLoaders, hasSize(greaterThanOrEqualTo(2)));



[flink] 01/02: [FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit f5fb2c036e69870c45c3733fe5ea4ead1a52d918
Author: ifndef-SleePy 
AuthorDate: Wed Aug 14 20:46:52 2019 +0800

[FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition

This closes #9436.
---
 .../java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java  | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
index f7b366b..ef1d816 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
@@ -196,9 +196,9 @@ public class TaskAsyncCallTest extends TestLogger {
triggerLatch.await();
 
task.notifyCheckpointComplete(1);
-   task.cancelExecution();
-
notifyCheckpointCompleteLatch.await();
+
+   task.cancelExecution();
stopLatch.await();
 
		assertThat(classLoaders, hasSize(greaterThanOrEqualTo(2)));



[flink] 02/02: [hotfix][tests] Fix code style error of TaskAsyncCallTest

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 428ce1b938813fba287a51bf86e6c52ef54453cb
Author: ifndef-SleePy 
AuthorDate: Wed Aug 14 20:51:08 2019 +0800

[hotfix][tests] Fix code style error of TaskAsyncCallTest
---
 .../apache/flink/runtime/taskmanager/TaskAsyncCallTest.java| 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
index ef1d816..efd7ddb 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/taskmanager/TaskAsyncCallTest.java
@@ -41,9 +41,7 @@ import org.apache.flink.runtime.executiongraph.TaskInformation;
 import org.apache.flink.runtime.filecache.FileCache;
 import org.apache.flink.runtime.io.disk.iomanager.IOManager;
 import org.apache.flink.runtime.io.network.NettyShuffleEnvironmentBuilder;
-import org.apache.flink.runtime.shuffle.ShuffleEnvironment;
 import org.apache.flink.runtime.io.network.TaskEventDispatcher;
-import org.apache.flink.runtime.taskexecutor.PartitionProducerStateChecker;
 import org.apache.flink.runtime.io.network.partition.NoOpResultPartitionConsumableNotifier;
 import org.apache.flink.runtime.io.network.partition.ResultPartitionConsumableNotifier;
 import org.apache.flink.runtime.jobgraph.JobVertexID;
@@ -53,8 +51,10 @@ import org.apache.flink.runtime.memory.MemoryManager;
 import org.apache.flink.runtime.metrics.groups.TaskMetricGroup;
 import org.apache.flink.runtime.metrics.groups.UnregisteredMetricGroups;
 import org.apache.flink.runtime.query.KvStateRegistry;
+import org.apache.flink.runtime.shuffle.ShuffleEnvironment;
 import org.apache.flink.runtime.state.TestTaskStateManager;
 import org.apache.flink.runtime.taskexecutor.KvStateService;
+import org.apache.flink.runtime.taskexecutor.PartitionProducerStateChecker;
 import org.apache.flink.runtime.taskexecutor.TestGlobalAggregateManager;
 import org.apache.flink.runtime.util.TestingTaskManagerRuntimeInfo;
 import org.apache.flink.util.SerializedValue;
@@ -80,6 +80,9 @@ import static org.mockito.Matchers.any;
 import static org.mockito.Mockito.mock;
 import static org.mockito.Mockito.when;
 
+/**
+ * Testing asynchronous call of {@link Task}.
+ */
 public class TaskAsyncCallTest extends TestLogger {
 
/** Number of expected checkpoints. */
@@ -265,6 +268,9 @@ public class TaskAsyncCallTest extends TestLogger {
executor);
}
 
+   /**
+* Invokable for testing checkpoints.
+*/
	public static class CheckpointsInOrderInvokable extends AbstractInvokable {
 
private volatile long lastCheckpointId = 0;



[flink] branch master updated (a315771 -> 428ce1b)

2019-08-14 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from a315771  [FLINK-13663][e2e] Double curl retries count and total time for Kafka downloads
 new f5fb2c0  [FLINK-13585][tests] Harden TaskAsyncCallTest by fixing race condition
 new 428ce1b  [hotfix][tests] Fix code style error of TaskAsyncCallTest

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .../flink/runtime/taskmanager/TaskAsyncCallTest.java   | 14 ++
 1 file changed, 10 insertions(+), 4 deletions(-)