[flink-web] branch asf-site updated (8730f94 -> 357b4c6)

2019-04-30 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a change to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git.


from 8730f94  [FLINK-7391] Normalize release entries in Downloads page
 new 127e99b  Remove completed item (removal of Hadoop convenience builds) from roadmap.
 new 357b4c6  Rebuild website

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 content/roadmap.html    | 5 -----
 content/zh/roadmap.html | 5 -----
 roadmap.md              | 4 ----
 roadmap.zh.md           | 4 ----
 4 files changed, 18 deletions(-)



[flink-web] 02/02: Rebuild website

2019-04-30 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 357b4c6ac576f97133c068f5259dd01bb8c7f8af
Author: Fabian Hueske 
AuthorDate: Tue Apr 30 16:32:07 2019 +0200

Rebuild website
---
 content/roadmap.html    | 5 -----
 content/zh/roadmap.html | 5 -----
 2 files changed, 10 deletions(-)

diff --git a/content/roadmap.html b/content/roadmap.html
index b3760f8..adba88f 100644
--- a/content/roadmap.html
+++ b/content/roadmap.html
@@ -346,11 +346,6 @@ metastore and Hive UDF support https://issues.apache.org/jira/browse/FL
 
 
   
-We are changing the build setup to not bundle Hadoop by default, but rather offer pre-packaged Hadoop
-libraries for the use with Yarn, HDFS, etc. as convenience downloads
-<a href="https://issues.apache.org/jira/browse/FLINK-11266">FLINK-11266</a>.
-  
-  
 The Flink code base is being updates to support Java 9, 10, and 11
 <a href="https://issues.apache.org/jira/browse/FLINK-8033">FLINK-8033</a>,
 <a href="https://issues.apache.org/jira/browse/FLINK-10725">FLINK-10725</a>.
diff --git a/content/zh/roadmap.html b/content/zh/roadmap.html
index 26dba2c..22efd3a 100644
--- a/content/zh/roadmap.html
+++ b/content/zh/roadmap.html
@@ -344,11 +344,6 @@ metastore and Hive UDF support https://issues.apache.org/jira/browse/FL
 
 
   
-We are changing the build setup to not bundle Hadoop by default, but rather offer pre-packaged Hadoop
-libraries for the use with Yarn, HDFS, etc. as convenience downloads
-<a href="https://issues.apache.org/jira/browse/FLINK-11266">FLINK-11266</a>.
-  
-  
 The Flink code base is being updates to support Java 9, 10, and 11
 <a href="https://issues.apache.org/jira/browse/FLINK-8033">FLINK-8033</a>,
 <a href="https://issues.apache.org/jira/browse/FLINK-10725">FLINK-10725</a>.



[flink-web] 01/02: Remove completed item (removal of Hadoop convenience builds) from roadmap.

2019-04-30 Thread fhueske
This is an automated email from the ASF dual-hosted git repository.

fhueske pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 127e99bfd4300c4ae272a611c9ddcb1076553478
Author: Fabian Hueske 
AuthorDate: Mon Apr 29 17:10:11 2019 +0200

Remove completed item (removal of Hadoop convenience builds) from roadmap.

This closes #203.
---
 roadmap.md    | 4 ----
 roadmap.zh.md | 4 ----
 2 files changed, 8 deletions(-)

diff --git a/roadmap.md b/roadmap.md
index 7fdd3cd..2ef95c2 100644
--- a/roadmap.md
+++ b/roadmap.md
@@ -160,10 +160,6 @@ Support for additional connectors and formats is a continuous process.
 
 # Miscellaneous
 
-  - We are changing the build setup to not bundle Hadoop by default, but rather offer pre-packaged Hadoop
-    libraries for the use with Yarn, HDFS, etc. as convenience downloads
-    [FLINK-11266](https://issues.apache.org/jira/browse/FLINK-11266).
-
   - The Flink code base is being updates to support Java 9, 10, and 11
 [FLINK-8033](https://issues.apache.org/jira/browse/FLINK-8033),
 [FLINK-10725](https://issues.apache.org/jira/browse/FLINK-10725).
diff --git a/roadmap.zh.md b/roadmap.zh.md
index 40d0edb..6508c6e 100644
--- a/roadmap.zh.md
+++ b/roadmap.zh.md
@@ -160,10 +160,6 @@ Support for additional connectors and formats is a continuous process.
 
 # Miscellaneous
 
-  - We are changing the build setup to not bundle Hadoop by default, but rather offer pre-packaged Hadoop
-    libraries for the use with Yarn, HDFS, etc. as convenience downloads
-    [FLINK-11266](https://issues.apache.org/jira/browse/FLINK-11266).
-
   - The Flink code base is being updates to support Java 9, 10, and 11
 [FLINK-8033](https://issues.apache.org/jira/browse/FLINK-8033),
 [FLINK-10725](https://issues.apache.org/jira/browse/FLINK-10725).



[flink] branch master updated: [FLINK-12203] Refactor ResultPartitionManager to break tie with Task

2019-04-30 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new b62db93  [FLINK-12203] Refactor ResultPartitionManager to break tie with Task
b62db93 is described below

commit b62db93bf63cb3bb34dd03d611a779d9e3fc61ac
Author: Andrey Zagrebin 
AuthorDate: Thu Apr 18 15:26:24 2019 +0200

[FLINK-12203] Refactor ResultPartitionManager to break tie with Task

At the moment, we have ResultPartitionManager.releasePartitionsProducedBy which uses
indexing by task in the network environment. These methods are eventually used only by
Task, which already knows its partitions, so Task can use ResultPartition.fail(cause) and
TaskExecutor.failPartition could directly use
NetworkEnvironment.releasePartitions(Collection<ResultPartitionID>). This also requires
that the JM Execution sends produced partition ids instead of just the ExecutionAttemptID.

This closes #8210.
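
For readers of this digest, a minimal, self-contained sketch of the new release path
described above. The classes below are simplified stand-ins, not Flink's real types; only
the names ResultPartitionID and releasePartitions and the build-then-release flow come
from the commit itself (see the Execution.java hunk below).

import java.util.ArrayList;
import java.util.Collection;

// Stand-in for org.apache.flink.runtime.io.network.partition.ResultPartitionID.
final class ResultPartitionID {
    final String partitionId;
    final String producerAttemptId;

    ResultPartitionID(String partitionId, String producerAttemptId) {
        this.partitionId = partitionId;
        this.producerAttemptId = producerAttemptId;
    }
}

// Stand-in gateway: after the refactoring the JM sends concrete partition ids,
// not just the ExecutionAttemptID of the producing task.
interface TaskManagerGateway {
    void releasePartitions(Collection<ResultPartitionID> partitionIds);
}

final class ExecutionSketch {
    void releaseProducedPartitions(TaskManagerGateway gateway,
                                   Collection<String> producedPartitionIds,
                                   String attemptId) {
        Collection<ResultPartitionID> partitionIds = new ArrayList<>(producedPartitionIds.size());
        for (String partition : producedPartitionIds) {
            partitionIds.add(new ResultPartitionID(partition, attemptId));
        }
        if (!partitionIds.isEmpty()) {
            gateway.releasePartitions(partitionIds);
        }
    }
}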
---
 .../flink/runtime/executiongraph/Execution.java| 20 ++---
 .../runtime/io/network/NetworkEnvironment.java | 13 ++
 .../io/network/partition/ResultPartition.java  |  2 +-
 .../network/partition/ResultPartitionManager.java  | 50 ++
 .../jobmanager/slots/TaskManagerGateway.java   |  8 ++--
 .../runtime/jobmaster/RpcTaskManagerGateway.java   |  6 ++-
 .../flink/runtime/taskexecutor/TaskExecutor.java   |  7 ++-
 .../runtime/taskexecutor/TaskExecutorGateway.java  |  8 ++--
 .../utils/SimpleAckingTaskManagerGateway.java  |  5 ++-
 .../taskexecutor/TestingTaskExecutorGateway.java   |  4 +-
 10 files changed, 65 insertions(+), 58 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
index 63f3125..e413619 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/Execution.java
@@ -37,6 +37,7 @@ import org.apache.flink.runtime.concurrent.FutureUtils;
 import org.apache.flink.runtime.deployment.TaskDeploymentDescriptor;
 import org.apache.flink.runtime.execution.ExecutionState;
 import org.apache.flink.runtime.instance.SlotSharingGroupId;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
 import org.apache.flink.runtime.jobmanager.scheduler.CoLocationConstraint;
 import org.apache.flink.runtime.jobmanager.scheduler.LocationPreferenceConstraint;
 import org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException;
@@ -688,7 +689,7 @@ public class Execution implements AccessExecution, Archiveable<ArchivedExecution>
+			Collection<IntermediateResultPartition> partitions = vertex.getProducedPartitions().values();
+			Collection<ResultPartitionID> partitionIds = new ArrayList<>(partitions.size());
+			for (IntermediateResultPartition partition : partitions) {
+				partitionIds.add(new ResultPartitionID(partition.getPartitionId(), attemptId));
+			}
+
+			if (!partitionIds.isEmpty()) {
+				// TODO For some tests this could be a problem when querying too early if all resources were released
+				taskManagerGateway.releasePartitions(partitionIds);
+			}
 		}
 	}
 
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
index 98c61a4..0ee8595 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/io/network/NetworkEnvironment.java
@@ -26,6 +26,7 @@ import org.apache.flink.runtime.io.network.buffer.NetworkBufferPool;
 import org.apache.flink.runtime.io.network.netty.NettyConfig;
 import org.apache.flink.runtime.io.network.netty.NettyConnectionManager;
 import org.apache.flink.runtime.io.network.partition.ResultPartition;
+import org.apache.flink.runtime.io.network.partition.ResultPartitionID;
 import org.apache.flink.runtime.io.network.partition.ResultPartitionManager;
 import org.apache.flink.runtime.io.network.partition.consumer.SingleInputGate;
 import org.apache.flink.runtime.taskexecutor.TaskExecutor;
@@ -38,6 +39,7 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 import java.io.IOException;
+import java.util.Collection;
 import java.util.Optional;
 
 import static org.apache.flink.util.Preconditions.checkNotNull;
@@ -207,6 +209,17 @@ public class NetworkEnvironment {
}
}
 
+	/**
+	 * Batch release intermediate result partitions.
+	 *
+	 * @param partitionIds partition ids to release
+	 */
+	public void releasePartitions(Collection<ResultPartitionID> partitionIds) {

[flink] branch master updated: [hotfix] regenerate rest-docs to latest code

2019-04-30 Thread trohrmann
This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 3c20704  [hotfix] regenerate rest-docs to latest code
3c20704 is described below

commit 3c207041c350227743bcbe8e36111657b1dfb371
Author: Yun Tang 
AuthorDate: Sun Apr 28 23:12:13 2019 +0800

[hotfix] regenerate rest-docs to latest code
---
 docs/_includes/generated/rest_v1_dispatcher.html | 159 ++-
 1 file changed, 157 insertions(+), 2 deletions(-)

diff --git a/docs/_includes/generated/rest_v1_dispatcher.html b/docs/_includes/generated/rest_v1_dispatcher.html
index 36ffe51..4df103d 100644
--- a/docs/_includes/generated/rest_v1_dispatcher.html
+++ b/docs/_includes/generated/rest_v1_dispatcher.html
@@ -355,6 +355,96 @@ Using 'curl' you can upload a jar via 'curl -X POST -H "Expect:" -F "jarfile=@pa
 
   
 
+  /jars/:jarid/plan
+
+
+  Verb: GET
+  Response code: 200 OK
+
+
+  Returns the dataflow plan of a job contained in a jar previously uploaded via '/jars/upload'. Program arguments can be passed both via the JSON request (recommended) or query parameters.
+
+
+  Path parameters
+
+
+  
+
+jarid - String value that identifies a jar. When uploading the jar a path is returned, where the filename is the ID. This value is equivalent to the `id` field in the list of uploaded jars (/jars).
+
+  
+
+
+  Query parameters
+
+
+  
+
+program-args (optional): Deprecated, please use 'programArg' instead. String value that specifies the arguments for the program or plan
+programArg (optional): Comma-separated list of program arguments.
+entry-class (optional): String value that specifies the fully qualified name of the entry point class. Overrides the class defined in the jar file manifest.
+parallelism (optional): Positive integer value that specifies the desired parallelism for the job.
+
+  
+
+
+  
+Request
+
+  
+
+{
+  "type" : "object",
+  "id" : 
"urn:jsonschema:org:apache:flink:runtime:webmonitor:handlers:JarPlanRequestBody",
+  "properties" : {
+"entryClass" : {
+  "type" : "string"
+},
+"programArgs" : {
+  "type" : "string"
+},
+"programArgsList" : {
+  "type" : "array",
+  "items" : {
+"type" : "string"
+  }
+},
+"parallelism" : {
+  "type" : "integer"
+},
+"jobId" : {
+  "type" : "any"
+}
+  }
+}
+  
+ 
+  
+
+
+  
+Response
+
+  
+
+{
+  "type" : "object",
+  "id" : "urn:jsonschema:org:apache:flink:runtime:rest:messages:JobPlanInfo",
+  "properties" : {
+"plan" : {
+  "type" : "any"
+}
+  }
+}
+  
+ 
+  
+
+  
+
+
+  
+
   /jars/:jarid/run
 
 
@@ -592,7 +682,7 @@ Using 'curl' you can upload a jar via 'curl -X POST -H "Expect:" -F "jarfile=@pa
   },
   "status" : {
 "type" : "string",
-"enum" : [ "CREATED", "RUNNING", "FAILING", "FAILED", 
"CANCELLING", "CANCELED", "FINISHED", "RESTARTING", "SUSPENDING", "SUSPENDED", 
"RECONCILING" ]
+"enum" : [ "CREATED", "RUNNING", "FAILING", "FAILED", 
"CANCELLING", "CANCELED", "FINISHED", "RESTARTING", "SUSPENDED", "RECONCILING" ]
   }
 }
   }
@@ -829,7 +919,7 @@ Using 'curl' you can upload a jar via 'curl -X POST -H "Expect:" -F "jarfile=@pa
 },
 "state" : {
   "type" : "string",
-  "enum" : [ "CREATED", "RUNNING", "FAILING", "FAILED", "CANCELLING", 
"CANCELED", "FINISHED", "RESTARTING", "SUSPENDING", "SUSPENDED", "RECONCILING" ]
+  "enum" : [ "CREATED", "RUNNING", "FAILING", "FAILED", "CANCELLING", 
"CANCELED", "FINISHED", "RESTARTING", "SUSPENDED", "RECONCILING" ]
 },
 "start-time" : {
   "type" : "integer"
@@ -2295,6 +2385,71 @@ Using 'curl' you can upload a jar via 'curl -X POST -H "Expect:" -F "jarfile=@pa
 
   
 
+  /jobs/:jobid/stop-with-savepoint
+
+
+  Verb: POST
+  Response code: 202 Accepted
+
+
+  Stops a job with a savepoint. Optionally, it can also emit a MAX_WATERMARK before taking the savepoint to flush out any state waiting for timers to fire.
+
+
+  Path parameters
+
+
+  
+
+jobid - 32-character hexadecimal string value that identifies a job.
+
+  
+
+
+  
+Request
+
+  
+
+{
+  "type" : "object",
+  "id" : 
"urn:jsonschema:org:apache:flink:runtime:rest:messages:job:savepoints:stop:StopWithSavepointRequestBody",
+  "properties" : {
+"targetDirectory" : {
+  "type" : "string"
+},
+"endOfEventTime" : {
+  "type" : "boolean"
+   
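
For anyone wanting to exercise the two endpoints documented above, a minimal sketch using
Java 11's built-in HTTP client. The dispatcher address, jar id, and job id are
placeholders; the JSON body follows the StopWithSavepointRequestBody schema shown above,
and 202 Accepted is the documented response code.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestV1Examples {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "http://localhost:8081"; // placeholder dispatcher address

        // GET /jars/:jarid/plan - dataflow plan of a previously uploaded jar.
        HttpRequest plan = HttpRequest.newBuilder()
                .uri(URI.create(base + "/jars/my-job.jar/plan")) // placeholder jar id
                .GET()
                .build();
        System.out.println(client.send(plan, HttpResponse.BodyHandlers.ofString()).body());

        // POST /jobs/:jobid/stop-with-savepoint - stop the job after taking a savepoint.
        String body = "{\"targetDirectory\":\"file:///tmp/savepoints\",\"endOfEventTime\":true}";
        HttpRequest stop = HttpRequest.newBuilder()
                .uri(URI.create(base + "/jobs/00000000000000000000000000000000/stop-with-savepoint")) // placeholder job id
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = client.send(stop, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // expect 202 Accepted
    }
}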

[flink] branch master updated: [FLINK-12187] Bug fixes, Use BufferedWriter in a loop instead of FileWriter

2019-04-30 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ca8145e  [FLINK-12187] Bug fixes, Use BufferedWriter in a loop instead of FileWriter
ca8145e is described below

commit ca8145e54607aa46b275d09e785e7c1653f0181c
Author: bd2019us 
AuthorDate: Sat Apr 13 23:33:42 2019 -0500

[FLINK-12187] Bug fixes, Use BufferedWriter in a loop instead of FileWriter
---
 .../flink/examples/java/relational/util/WebLogDataGenerator.java   | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java b/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java
index f68ece1..7bc7ca9 100644
--- a/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java
+++ b/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/relational/util/WebLogDataGenerator.java
@@ -20,6 +20,7 @@ package org.apache.flink.examples.java.relational.util;
 
 import org.apache.flink.examples.java.relational.WebLogAnalysis;
 
+import java.io.BufferedWriter;
 import java.io.FileWriter;
 import java.io.IOException;
 import java.util.Calendar;
@@ -98,7 +99,7 @@ public class WebLogDataGenerator {
 
 		Random rand = new Random(Calendar.getInstance().getTimeInMillis());
 
-		try (FileWriter fw = new FileWriter(path)) {
+		try (BufferedWriter fw = new BufferedWriter(new FileWriter(path))) {
 			for (int i = 0; i < noDocs; i++) {
 
 				int wordsInDoc = rand.nextInt(40) + 10;
@@ -136,7 +137,7 @@ public class WebLogDataGenerator {
 
 		Random rand = new Random(Calendar.getInstance().getTimeInMillis());
 
-		try (FileWriter fw = new FileWriter(path)) {
+		try (BufferedWriter fw = new BufferedWriter(new FileWriter(path))) {
 			for (int i = 0; i < noDocs; i++) {
 				// Rank
 				StringBuilder rank = new StringBuilder(rand.nextInt(100) + "|");
@@ -168,7 +169,7 @@ public class WebLogDataGenerator {
 
 		Random rand = new Random(Calendar.getInstance().getTimeInMillis());
 
-		try (FileWriter fw = new FileWriter(path)) {
+		try (BufferedWriter fw = new BufferedWriter(new FileWriter(path))) {
 			for (int i = 0; i < noVisits; i++) {
 
 				int year = 2000 + rand.nextInt(10); // yearFilter 3
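
The rationale for the change above, as a standalone sketch: an unbuffered FileWriter can
hit the OS on every small write inside a tight loop, while BufferedWriter batches those
writes into large chunks. The path and record format below are made up for illustration.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public class BufferedWriteDemo {
    public static void main(String[] args) throws IOException {
        String path = "/tmp/weblog-demo.txt"; // hypothetical output path

        // Buffered: small writes accumulate in the writer's internal buffer
        // and are flushed in large chunks, which matters inside tight loops.
        try (BufferedWriter fw = new BufferedWriter(new FileWriter(path))) {
            for (int i = 0; i < 100_000; i++) {
                fw.write(i + "|some-record-payload");
                fw.newLine();
            }
        } // try-with-resources flushes and closes the writer
    }
}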



[flink] branch release-1.8 updated: [hotfix] Fix compile error from rebase of FLINK-12296]

2019-04-30 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 4d1605b  [hotfix] Fix compile error from rebase of FLINK-12296]
4d1605b is described below

commit 4d1605bf52ec03b5e01d0bb950f279a3e6da9471
Author: Stefan Richter 
AuthorDate: Tue Apr 30 11:26:34 2019 +0200

[hotfix] Fix compile error from rebase of FLINK-12296]
---
 .../org/apache/flink/test/state/StatefulOperatorChainedTaskTest.java| 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/flink-tests/src/test/java/org/apache/flink/test/state/StatefulOperatorChainedTaskTest.java b/flink-tests/src/test/java/org/apache/flink/test/state/StatefulOperatorChainedTaskTest.java
index 5651929..c3e1ba9 100644
--- a/flink-tests/src/test/java/org/apache/flink/test/state/StatefulOperatorChainedTaskTest.java
+++ b/flink-tests/src/test/java/org/apache/flink/test/state/StatefulOperatorChainedTaskTest.java
@@ -176,7 +176,7 @@ public class StatefulOperatorChainedTaskTest {
 
 		testHarness.getTaskStateManager().setWaitForReportLatch(new OneShotLatch());
 
-		while (!streamTask.triggerCheckpoint(checkpointMetaData, CheckpointOptions.forCheckpointWithDefaultLocation(), false)) {}
+		while (!streamTask.triggerCheckpoint(checkpointMetaData, CheckpointOptions.forCheckpointWithDefaultLocation())) {}
 

testHarness.getTaskStateManager().getWaitForReportLatch().await();
 		long reportedCheckpointId = testHarness.getTaskStateManager().getReportedCheckpointId();



[flink] branch master updated: [FLINK-12238][hive] Support database related operations in GenericHiveMetastoreCatalog and setup flink-connector-hive module

2019-04-30 Thread kurt
This is an automated email from the ASF dual-hosted git repository.

kurt pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 39d8236  [FLINK-12238][hive] Support database related operations in GenericHiveMetastoreCatalog and setup flink-connector-hive module
39d8236 is described below

commit 39d82368eca3891d27c85dd9bc56db2344ee73ba
Author: Bowen L 
AuthorDate: Tue Apr 30 01:19:39 2019 -0700

[FLINK-12238][hive] Support database related operations in GenericHiveMetastoreCatalog and setup flink-connector-hive module

This closes #8205
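
For orientation, a rough sketch of the kind of database operations this commit adds. The
interface below is an illustrative stand-in, not Flink's actual catalog API at this
revision; only the class names GenericHiveMetastoreCatalog and GenericCatalogDatabase and
the notion of database-level catalog operations come from the commit itself.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative stand-in for the database-level surface of a catalog.
interface CatalogDatabaseOps {
    void open();
    void createDatabase(String name, Map<String, String> properties, boolean ignoreIfExists);
    boolean databaseExists(String name);
    List<String> listDatabases();
    void dropDatabase(String name, boolean ignoreIfNotExists);
    void close();
}

final class CatalogSketch {
    static void demo(CatalogDatabaseOps catalog) {
        catalog.open();
        Map<String, String> props = new HashMap<>();
        props.put("owner", "flink"); // free-form properties persisted in the metastore
        catalog.createDatabase("analytics", props, false); // false = fail if it already exists
        if (catalog.databaseExists("analytics")) {
            System.out.println(catalog.listDatabases());
        }
        catalog.close();
    }
}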
---
 flink-connectors/flink-connector-hive/pom.xml  | 429 +
 .../catalog/hive/GenericHiveMetastoreCatalog.java  | 352 +
 .../hive/GenericHiveMetastoreCatalogUtil.java  |  49 +++
 .../src/main/resources/META-INF/NOTICE |  26 ++
 .../main/resources/META-INF/licenses/LICENSE.antlr |  38 ++
 .../hive/GenericHiveMetastoreCatalogTest.java  |  83 
 .../flink/table/catalog/hive/HiveTestUtils.java|  54 +++
 .../src/test/resources/hive-site.xml   |  42 ++
 .../src/test/resources/log4j-test.properties   |  24 ++
 flink-connectors/pom.xml   |   1 +
 flink-table/flink-table-api-java/pom.xml   |  10 +
 .../table/catalog/GenericCatalogDatabase.java  |   2 +-
 .../table/catalog/GenericInMemoryCatalogTest.java  | 248 +++-
 .../flink/table/catalog/CatalogTestBase.java   | 241 
 14 files changed, 1396 insertions(+), 203 deletions(-)

diff --git a/flink-connectors/flink-connector-hive/pom.xml b/flink-connectors/flink-connector-hive/pom.xml
new file mode 100644
index 0000000..cb09934
--- /dev/null
+++ b/flink-connectors/flink-connector-hive/pom.xml
@@ -0,0 +1,429 @@
+
+
+<project xmlns="http://maven.apache.org/POM/4.0.0"
+		xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+		xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+	<modelVersion>4.0.0</modelVersion>
+
+	<parent>
+		<groupId>org.apache.flink</groupId>
+		<artifactId>flink-connectors</artifactId>
+		<version>1.9-SNAPSHOT</version>
+		<relativePath>..</relativePath>
+	</parent>
+
+	<artifactId>flink-connector-hive_${scala.binary.version}</artifactId>
+	<name>flink-connector-hive</name>
+
+	<packaging>jar</packaging>
+
+	<properties>
+		<hive.version>2.3.4</hive.version>
+		<hivemetastore.hadoop.version>2.7.2</hivemetastore.hadoop.version>
+	</properties>
+
+	<dependencies>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-table-common</artifactId>
+			<version>${project.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.flink</groupId>
+			<artifactId>flink-table-api-java</artifactId>
+			<version>${project.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hadoop</groupId>
+			<artifactId>hadoop-common</artifactId>
+			<version>${hivemetastore.hadoop.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hadoop</groupId>
+			<artifactId>hadoop-mapreduce-client-core</artifactId>
+			<version>${hivemetastore.hadoop.version}</version>
+			<scope>provided</scope>
+		</dependency>
+
+		<dependency>
+			<groupId>org.apache.hive</groupId>
+			<artifactId>hive-metastore</artifactId>
+			<version>${hive.version}</version>
+			<exclusions>
+				<exclusion>
+					<groupId>org.apache.hive</groupId>
+					<artifactId>hive-shims</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>javolution</groupId>
+					<artifactId>javolution</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>com.google.guava</groupId>
+					<artifactId>guava</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>com.google.protobuf</groupId>
+					<artifactId>protobuf-java</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>org.apache.derby</groupId>
+					<artifactId>derby</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>org.apache.hbase</groupId>
+					<artifactId>hbase-client</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>commons-lang</groupId>
+					<artifactId>commons-lang</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>com.zaxxer</groupId>
+					<artifactId>HikariCP</artifactId>
+				</exclusion>
+				<exclusion>
+					<groupId>javax.jdo</groupId>

[flink] branch master updated: [FLINK-12306][Runtime] Change the name of log into LOG

2019-04-30 Thread rmetzger
This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new 0632600  [FLINK-12306][Runtime] Change the name of log into LOG
0632600 is described below

commit 06326007da509a6dcd3c64c13f7286d0bbcb6179
Author: Zi Li <820972...@qq.com>
AuthorDate: Sat Apr 27 10:31:09 2019 +0800

[FLINK-12306][Runtime] Change the name of log into LOG
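
The convention being enforced, in isolation: a static final logger is a constant, so the
constant-naming rule expects an upper-case field name. A minimal SLF4J example:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LoggerNamingExample {
    // static final fields are constants, hence the upper-case name LOG.
    private static final Logger LOG = LoggerFactory.getLogger(LoggerNamingExample.class);

    public static void main(String[] args) {
        LOG.warn("constant-style logger name");
    }
}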
---
 .../java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java| 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
index c2e2b7d..b380e2d 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/MainThreadValidatorUtil.java
@@ -34,7 +34,7 @@ import static org.apache.flink.util.Preconditions.checkNotNull;
  */
 public final class MainThreadValidatorUtil {
 
-	private static final Logger log = LoggerFactory.getLogger(MainThreadValidatorUtil.class);
+	private static final Logger LOG = LoggerFactory.getLogger(MainThreadValidatorUtil.class);
 
private final RpcEndpoint endpoint;
 
@@ -65,7 +65,7 @@ public final class MainThreadValidatorUtil {
 		String violationMsg = "Violation of main thread constraint detected: expected <"
 			+ expected + "> but running in <" + actual + ">.";
 
-   log.warn(violationMsg, new Exception(violationMsg));
+   LOG.warn(violationMsg, new Exception(violationMsg));
 
return false;
}



[flink] branch master updated (0632600 -> f98a7a8)

2019-04-30 Thread chesnay
This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git.


from 0632600  [FLINK-12306][Runtime] Change the name of log into LOG
 new f1b2e9b  [hotfix][metrics] Remove legacy/unused code
 new 0af34bd  [hotfix][tests][runtime] Extract utilities for creating InputChannels
 new f98a7a8  [FLINK-12199][metrics][network] Decouple network metrics from Task

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.
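
The shape of the [FLINK-12199] change, sketched with stand-in types: input channels
receive a small metrics holder at construction time instead of reaching back into the
Task that created them. InputChannelMetrics is real per the file list below; everything
else here, including its fields, is an assumption for illustration.

// Minimal stand-in counter; Flink uses org.apache.flink.metrics.Counter.
final class Counter {
    private long count;
    void inc(long n) { count += n; }
    long get() { return count; }
}

// Assumed contents of the metrics holder handed to input channels.
final class InputChannelMetrics {
    final Counter numBytesIn = new Counter();
    final Counter numBuffersIn = new Counter();
}

final class InputChannelSketch {
    private final InputChannelMetrics metrics;

    // The channel gets its metrics directly, so it needs no Task reference.
    InputChannelSketch(InputChannelMetrics metrics) {
        this.metrics = metrics;
    }

    void onBuffer(int sizeInBytes) {
        metrics.numBytesIn.inc(sizeInBytes);
        metrics.numBuffersIn.inc(1);
    }
}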


Summary of changes:
 .../webmonitor/history/HistoryServerTest.java  |   7 +-
 .../flink/runtime/executiongraph/IOMetrics.java|  70 +
 .../partition/consumer/InputChannelMetrics.java|  73 +
 .../partition/consumer/LocalInputChannel.java  |   5 +-
 .../partition/consumer/RemoteInputChannel.java |   5 +-
 .../partition/consumer/SingleInputGate.java|  14 +-
 .../partition/consumer/UnknownInputChannel.java|   5 +-
 .../apache/flink/runtime/metrics/MetricNames.java  |  11 +-
 .../runtime/metrics/groups/TaskIOMetricGroup.java  |  53 ++-
 .../rest/handler/job/JobDetailsHandler.java|   4 +-
 .../rest/handler/job/JobVertexDetailsHandler.java  |   4 +-
 .../handler/job/JobVertexTaskManagersHandler.java  |   4 +-
 .../rest/handler/util/MutableIOMetrics.java|  76 ++
 .../job/SubtaskExecutionAttemptDetailsInfo.java|   4 +-
 .../org/apache/flink/runtime/taskmanager/Task.java |   5 +-
 .../ExecutionGraphDeploymentTest.java  |   6 +-
 .../runtime/io/network/NetworkEnvironmentTest.java |   3 +-
 .../netty/PartitionRequestClientHandlerTest.java   |   4 +-
 .../network/partition/InputChannelTestUtils.java   |  71 +
 .../network/partition/InputGateConcurrentTest.java |  28 ++--
 .../network/partition/InputGateFairnessTest.java   |  33 ++--
 .../partition/consumer/LocalInputChannelTest.java  |  53 ++-
 .../partition/consumer/RemoteInputChannelTest.java |  14 +-
 .../partition/consumer/SingleInputGateTest.java|  24 +--
 .../metrics/groups/TaskIOMetricGroupTest.java  |  10 +-
 .../SubtaskCurrentAttemptDetailsHandlerTest.java   |  15 +-
 .../SubtaskExecutionAttemptDetailsHandlerTest.java |  15 +-
 .../legacy/utils/ArchivedExecutionBuilder.java |   3 +-
 .../legacy/utils/ArchivedJobGenerationUtils.java   | 167 -
 .../runtime/taskmanager/TaskAsyncCallTest.java |   5 +-
 .../apache/flink/runtime/taskmanager/TaskTest.java |   5 +-
 .../StreamNetworkBenchmarkEnvironment.java |   6 +-
 .../runtime/tasks/SynchronousCheckpointITCase.java |   5 +-
 33 files changed, 286 insertions(+), 521 deletions(-)
 create mode 100644 flink-runtime/src/main/java/org/apache/flink/runtime/io/network/partition/consumer/InputChannelMetrics.java
 delete mode 100644 flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/utils/ArchivedJobGenerationUtils.java



[flink] branch release-1.8 updated: [FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators

2019-04-30 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch release-1.8
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.8 by this push:
 new 531d727  [FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators
531d727 is described below

commit 531d727f9b32c310d8d63b253019b8cc4a23a3eb
Author: klion26 
AuthorDate: Wed Apr 24 04:52:03 2019 +0200

[FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators

- Change
Will change the local data path from

`.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/rocksdb`

to

`.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/operator_id`

When preparing the local directory, Flink deletes the local directory for each subtask if it already exists.
If more than one stateful operator is chained in a single task, they'll share the same local directory path,
so the local directory will be deleted unexpectedly and we'll get data loss.

This closes #8263.

(cherry picked from commit ee60846dc588b1a832a497ff9522d7a3a282c350)
---
 .../CheckpointStreamWithResultProviderTest.java|   3 +
 .../state/StateSnapshotCompressionTest.java|   2 +-
 .../ttl/mock/MockKeyedStateBackendBuilder.java |   1 +
 .../runtime/state/ttl/mock/MockStateBackend.java   |   2 +-
 .../state/RocksDBKeyedStateBackendBuilder.java |   1 +
 .../snapshot/RocksIncrementalSnapshotStrategy.java |  17 +-
 .../tasks/OneInputStreamTaskTestHarness.java   |  50 +++-
 .../runtime/tasks/StreamConfigChainer.java |  23 +-
 .../runtime/tasks/StreamMockEnvironment.java   |   8 +-
 .../runtime/tasks/StreamTaskTestHarness.java   |  21 +-
 .../state/StatefulOperatorChainedTaskTest.java | 260 +
 11 files changed, 369 insertions(+), 19 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
index 2af25d9..57653e2 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
@@ -35,6 +35,9 @@ import java.io.Closeable;
 import java.io.File;
 import java.io.IOException;
 
+/**
+ * Test for CheckpointStreamWithResultProvider.
+ */
 public class CheckpointStreamWithResultProviderTest extends TestLogger {
 
private static TemporaryFolder temporaryFolder;
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
index a10be26..de687ff 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
@@ -18,7 +18,6 @@
 
 package org.apache.flink.runtime.state;
 
-import org.apache.commons.io.IOUtils;
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.state.ValueStateDescriptor;
 import org.apache.flink.api.common.typeutils.base.StringSerializer;
@@ -34,6 +33,7 @@ import org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory;
 import org.apache.flink.runtime.state.ttl.TtlTimeProvider;
 import org.apache.flink.util.TestLogger;
 
+import org.apache.commons.io.IOUtils;
 import org.junit.Assert;
 import org.junit.Test;
 
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
index efef923..2196dc9 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
@@ -30,6 +30,7 @@ import org.apache.flink.runtime.state.StreamCompressionDecorator;
 import org.apache.flink.runtime.state.ttl.TtlTimeProvider;
 
 import javax.annotation.Nonnull;
+
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.Map;
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
index bdf07bf..f50f1b6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
@@ -35,8 +35,8 @@ 

[flink] branch master updated: [FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators

2019-04-30 Thread srichter
This is an automated email from the ASF dual-hosted git repository.

srichter pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
 new ee60846  [FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators
ee60846 is described below

commit ee60846dc588b1a832a497ff9522d7a3a282c350
Author: klion26 
AuthorDate: Wed Apr 24 10:52:03 2019 +0800

[FLINK-12296][StateBackend] Fix local state directory collision with state loss for chained keyed operators

- Change
Will change the local data path from

`.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/rocksdb`

to

`.../local_state_root/allocation_id/job_id/jobvertex_id_subtask_id/chk_id/operator_id`

When preparing the local directory, Flink deletes the local directory for each subtask if it already exists.
If more than one stateful operator is chained in a single task, they'll share the same local directory path,
so the local directory will be deleted unexpectedly and we'll get data loss.

This closes #8263.
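
The essence of the fix as a standalone sketch (directory names are illustrative): once
each chained operator keys its local state directory by operator id, preparing one
operator's directory no longer wipes its neighbor's.

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class LocalStateDirDemo {
    public static void main(String[] args) throws IOException {
        Path chkDir = Files.createTempDirectory("chk_1"); // stands in for .../chk_id/

        // Before the fix every chained keyed operator used the same "rocksdb"
        // subdirectory, so preparing operator B deleted operator A's state.
        // After the fix the subdirectory is keyed by operator id instead.
        String[] chainedOperatorIds = {"op_a1b2", "op_c3d4"}; // illustrative ids
        for (String operatorId : chainedOperatorIds) {
            File localDir = chkDir.resolve(operatorId).toFile();
            if (localDir.exists()) {
                deleteRecursively(localDir); // touches only this operator's leftovers
            }
            if (!localDir.mkdirs()) {
                throw new IOException("Could not create " + localDir);
            }
        }
    }

    private static void deleteRecursively(File dir) throws IOException {
        File[] children = dir.listFiles();
        if (children != null) {
            for (File child : children) {
                deleteRecursively(child);
            }
        }
        Files.delete(dir.toPath());
    }
}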
---
 .../CheckpointStreamWithResultProviderTest.java|   3 +
 .../state/StateSnapshotCompressionTest.java|   2 +-
 .../ttl/mock/MockKeyedStateBackendBuilder.java |   1 +
 .../runtime/state/ttl/mock/MockStateBackend.java   |   2 +-
 .../state/RocksDBKeyedStateBackendBuilder.java |   1 +
 .../snapshot/RocksIncrementalSnapshotStrategy.java |  17 +-
 .../tasks/OneInputStreamTaskTestHarness.java   |  50 +++-
 .../runtime/tasks/StreamConfigChainer.java |  23 +-
 .../runtime/tasks/StreamMockEnvironment.java   |   8 +-
 .../runtime/tasks/StreamTaskTestHarness.java   |  21 +-
 .../state/StatefulOperatorChainedTaskTest.java | 260 +
 11 files changed, 369 insertions(+), 19 deletions(-)

diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
index 2af25d9..57653e2 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/CheckpointStreamWithResultProviderTest.java
@@ -35,6 +35,9 @@ import java.io.Closeable;
 import java.io.File;
 import java.io.IOException;
 
+/**
+ * Test for CheckpointStreamWithResultProvider.
+ */
 public class CheckpointStreamWithResultProviderTest extends TestLogger {
 
private static TemporaryFolder temporaryFolder;
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
index a10be26..de687ff 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/StateSnapshotCompressionTest.java
@@ -18,7 +18,6 @@
 
 package org.apache.flink.runtime.state;
 
-import org.apache.commons.io.IOUtils;
 import org.apache.flink.api.common.ExecutionConfig;
 import org.apache.flink.api.common.state.ValueStateDescriptor;
 import org.apache.flink.api.common.typeutils.base.StringSerializer;
@@ -34,6 +33,7 @@ import org.apache.flink.runtime.state.memory.MemCheckpointStreamFactory;
 import org.apache.flink.runtime.state.ttl.TtlTimeProvider;
 import org.apache.flink.util.TestLogger;
 
+import org.apache.commons.io.IOUtils;
 import org.junit.Assert;
 import org.junit.Test;
 
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
index 3ffe183..8ec9b4d 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockKeyedStateBackendBuilder.java
@@ -31,6 +31,7 @@ import org.apache.flink.runtime.state.heap.InternalKeyContextImpl;
 import org.apache.flink.runtime.state.ttl.TtlTimeProvider;
 
 import javax.annotation.Nonnull;
+
 import java.util.Collection;
 import java.util.HashMap;
 import java.util.Map;
diff --git a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
index bdf07bf..f50f1b6 100644
--- a/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
+++ b/flink-runtime/src/test/java/org/apache/flink/runtime/state/ttl/mock/MockStateBackend.java
@@ -35,8 +35,8 @@ import org.apache.flink.runtime.state.CheckpointStorageLocationReference;
 import