[GitHub] flink pull request #5463: [FLINK-8475][config][docs] Integrate YARN options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5463#discussion_r167612224
  
--- Diff: 
flink-yarn/src/main/java/org/apache/flink/yarn/configuration/YarnConfigOptions.java
 ---
@@ -79,14 +87,18 @@
 */
public static final ConfigOption<String> APPLICATION_ATTEMPTS =
key("yarn.application-attempts")
-   .noDefaultValue();
+   .noDefaultValue()
--- End diff --

I haven't seen any recommendation in the [Yarn 
docs](https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/yarn_setup.html).
 In the [HA 
docs](https://ci.apache.org/projects/flink/flink-docs-master/ops/jobmanager_high_availability.html#yarn-cluster-high-availability)
 we set it to 10 but never explicitly call this a recommended value.
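For reference, the linked HA docs configure the option like this (the `10` there is an example value, not an official recommendation):

```yaml
# Example taken from the HA docs; not a recommended default.
yarn.application-attempts: 10
```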


---


[GitHub] flink pull request #5463: [FLINK-8475][config][docs] Integrate YARN options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5463#discussion_r167611073
  
--- Diff: docs/ops/config.md ---
@@ -408,38 +408,7 @@ of the JobManager, because the same ActorSystem is 
used. It's not possible to use
 
 ### YARN
 
-- `containerized.heap-cutoff-ratio`: (Default 0.25) Percentage of heap 
space to remove from containers started by YARN. When a user requests a certain 
amount of memory for each TaskManager container (for example 4 GB), we can not 
pass this amount as the maximum heap space for the JVM (`-Xmx` argument) 
because the JVM is also allocating memory outside the heap. YARN is very strict 
with killing containers which are using more memory than requested. Therefore, 
we remove this fraction of the memory from the requested heap as a safety 
margin and add it to the memory used off-heap.
--- End diff --

correct.
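The cutoff arithmetic described in the option text above can be sketched as follows (an illustrative simplification; `cutoff_min_mb` mirrors the separate `containerized.heap-cutoff-min` option, and Flink's exact rounding may differ):

```python
def compute_jvm_heap_mb(container_mb: int,
                        cutoff_ratio: float = 0.25,
                        cutoff_min_mb: int = 600) -> int:
    """Subtract a safety cutoff from the requested container memory
    to obtain the -Xmx value, so the JVM's off-heap allocations do
    not push the container over its YARN memory limit."""
    cutoff = max(int(container_mb * cutoff_ratio), cutoff_min_mb)
    return container_mb - cutoff

# A 4 GB container leaves 3072 MB of heap with the default 0.25 ratio.
```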


---


[GitHub] flink issue #5464: [FLINK-8475][config][docs] Integrate Checkpointing option...

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5464
  
merging.


---


[GitHub] flink pull request #5462: [FLINK-8475][config][docs] Integrate HA-ZK options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5462#discussion_r167610029
  
--- Diff: docs/ops/config.md ---
@@ -502,27 +502,7 @@ Previously this key was named `recovery.mode` and the 
default value was `standal
 
  ZooKeeper-based HA Mode
 
-- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL 
which is used to connect to the ZooKeeper cluster when the 'zookeeper' HA mode 
is selected. Previously this key was named `recovery.zookeeper.quorum`.
-
-- `high-availability.zookeeper.path.root`: (Default `/flink`) Defines the 
root dir under which the ZooKeeper HA mode will create namespace directories. 
Previously this key was named `recovery.zookeeper.path.root`.
-
-- `high-availability.zookeeper.path.latch`: (Default `/leaderlatch`) 
Defines the znode of the leader latch which is used to elect the leader. 
Previously this key was named `recovery.zookeeper.path.latch`.
-
-- `high-availability.zookeeper.path.leader`: (Default `/leader`) Defines 
the znode of the leader which contains the URL to the leader and the current 
leader session ID. Previously this key was named 
`recovery.zookeeper.path.leader`.
-
-- `high-availability.storageDir`: Defines the directory in the state 
backend where the JobManager metadata will be stored (ZooKeeper only keeps 
pointers to it). Required for HA. Previously this key was named 
`recovery.zookeeper.storageDir` and `high-availability.zookeeper.storageDir`.
-
-- `high-availability.zookeeper.client.session-timeout`: (Default `6`) 
Defines the session timeout for the ZooKeeper session in ms. Previously this 
key was named `recovery.zookeeper.client.session-timeout`.
-
-- `high-availability.zookeeper.client.connection-timeout`: (Default 
`15000`) Defines the connection timeout for ZooKeeper in ms. Previously this 
key was named `recovery.zookeeper.client.connection-timeout`.
-
-- `high-availability.zookeeper.client.retry-wait`: (Default `5000`) 
Defines the pause between consecutive retries in ms. Previously this key was 
named `recovery.zookeeper.client.retry-wait`.
-
-- `high-availability.zookeeper.client.max-retry-attempts`: (Default `3`) 
Defines the number of connection retries before the client gives up. Previously 
this key was named `recovery.zookeeper.client.max-retry-attempts`.
-
-- `high-availability.job.delay`: (Default `akka.ask.timeout`) Defines the 
delay before persisted jobs are recovered in case of a master recovery 
situation. Previously this key was named `recovery.job.delay`.
--- End diff --

ah, this is a general HA option and not specifically tied to ZK. It should 
be moved into the HA section instead of outright removing it.
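For context, a minimal `flink-conf.yaml` snippet wiring the keys discussed above together might look like this (hostnames and paths are purely illustrative):

```yaml
high-availability: zookeeper
high-availability.zookeeper.quorum: zk1:2181,zk2:2181,zk3:2181
high-availability.zookeeper.path.root: /flink
high-availability.storageDir: hdfs:///flink/ha/
```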


---


[GitHub] flink pull request #5462: [FLINK-8475][config][docs] Integrate HA-ZK options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5462#discussion_r167609071
  
--- Diff: docs/ops/config.md ---
@@ -502,27 +502,7 @@ Previously this key was named `recovery.mode` and the 
default value was `standal
 
  ZooKeeper-based HA Mode
 
-- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL 
which is used to connect to the ZooKeeper cluster when the 'zookeeper' HA mode 
is selected. Previously this key was named `recovery.zookeeper.quorum`.
-
-- `high-availability.zookeeper.path.root`: (Default `/flink`) Defines the 
root dir under which the ZooKeeper HA mode will create namespace directories. 
Previously this key was named `recovery.zookeeper.path.root`.
-
-- `high-availability.zookeeper.path.latch`: (Default `/leaderlatch`) 
Defines the znode of the leader latch which is used to elect the leader. 
Previously this key was named `recovery.zookeeper.path.latch`.
-
-- `high-availability.zookeeper.path.leader`: (Default `/leader`) Defines 
the znode of the leader which contains the URL to the leader and the current 
leader session ID. Previously this key was named 
`recovery.zookeeper.path.leader`.
-
-- `high-availability.storageDir`: Defines the directory in the state 
backend where the JobManager metadata will be stored (ZooKeeper only keeps 
pointers to it). Required for HA. Previously this key was named 
`recovery.zookeeper.storageDir` and `high-availability.zookeeper.storageDir`.
-
-- `high-availability.zookeeper.client.session-timeout`: (Default `6`) 
Defines the session timeout for the ZooKeeper session in ms. Previously this 
key was named `recovery.zookeeper.client.session-timeout`.
-
-- `high-availability.zookeeper.client.connection-timeout`: (Default 
`15000`) Defines the connection timeout for ZooKeeper in ms. Previously this 
key was named `recovery.zookeeper.client.connection-timeout`.
-
-- `high-availability.zookeeper.client.retry-wait`: (Default `5000`) 
Defines the pause between consecutive retries in ms. Previously this key was 
named `recovery.zookeeper.client.retry-wait`.
-
-- `high-availability.zookeeper.client.max-retry-attempts`: (Default `3`) 
Defines the number of connection retries before the client gives up. Previously 
this key was named `recovery.zookeeper.client.max-retry-attempts`.
-
-- `high-availability.job.delay`: (Default `akka.ask.timeout`) Defines the 
delay before persisted jobs are recovered in case of a master recovery 
situation. Previously this key was named `recovery.job.delay`.
--- End diff --

ehhh… could be that the tables are outdated (ironic, isn't it), let me 
check.


---


[GitHub] flink pull request #5462: [FLINK-8475][config][docs] Integrate HA-ZK options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5462#discussion_r167608682
  
--- Diff: docs/ops/config.md ---
@@ -502,27 +502,7 @@ Previously this key was named `recovery.mode` and the 
default value was `standal
 
  ZooKeeper-based HA Mode
 
-- `high-availability.zookeeper.quorum`: Defines the ZooKeeper quorum URL 
which is used to connect to the ZooKeeper cluster when the 'zookeeper' HA mode 
is selected. Previously this key was named `recovery.zookeeper.quorum`.
-
-- `high-availability.zookeeper.path.root`: (Default `/flink`) Defines the 
root dir under which the ZooKeeper HA mode will create namespace directories. 
Previously this key was named `recovery.zookeeper.path.root`.
-
-- `high-availability.zookeeper.path.latch`: (Default `/leaderlatch`) 
Defines the znode of the leader latch which is used to elect the leader. 
Previously this key was named `recovery.zookeeper.path.latch`.
-
-- `high-availability.zookeeper.path.leader`: (Default `/leader`) Defines 
the znode of the leader which contains the URL to the leader and the current 
leader session ID. Previously this key was named 
`recovery.zookeeper.path.leader`.
-
-- `high-availability.storageDir`: Defines the directory in the state 
backend where the JobManager metadata will be stored (ZooKeeper only keeps 
pointers to it). Required for HA. Previously this key was named 
`recovery.zookeeper.storageDir` and `high-availability.zookeeper.storageDir`.
--- End diff --

subsumed by `"high-availability.storageDir"`


---


[GitHub] flink pull request #5459: [FLINK-8475][config][docs] Integrate FS options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5459#discussion_r167608020
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ---
@@ -127,7 +130,30 @@
 */
public static final ConfigOption<String> DEFAULT_FILESYSTEM_SCHEME = 
ConfigOptions
.key("fs.default-scheme")
-   .noDefaultValue();
+   .noDefaultValue()
+   .withDescription("The default filesystem scheme, used 
for paths that do not declare a scheme explicitly.");
--- End diff --

ah I remember now, looking at the description:

[Part1]
The default filesystem scheme to be used

[Part2]
, with the necessary authority to contact, e.g. the host:port of the 
NameNode in the case of HDFS (if needed).

[Part3]
By default, this is set to file:/// which points to the local filesystem. 
This means that the local filesystem is going to be used to search for 
user-specified files without an explicit scheme definition.

[Part4]
This scheme is used ONLY if no other scheme is specified (explicitly) in 
the user-provided URI.

Part 1 and 4 are contained in the description, part 3 was left out since 
file:/// isn't the documented default. Only part 2 is really missing.
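The resolution rule from parts 1 and 4 can be sketched like so (a hypothetical helper, not Flink's actual code; `resolve_scheme` and its arguments are made up for illustration):

```python
from urllib.parse import urlparse

def resolve_scheme(path: str, default_scheme: str) -> str:
    """Apply the default filesystem scheme only when the user-provided
    URI carries no explicit scheme; an explicit scheme always wins."""
    if urlparse(path).scheme:
        return path
    return default_scheme.rstrip("/") + "/" + path.lstrip("/")

# resolve_scheme("/user/data", "hdfs://namenode:9000") prepends the
# default scheme plus the authority (the host:port from part 2).
```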


---


[GitHub] flink pull request #5459: [FLINK-8475][config][docs] Integrate FS options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5459#discussion_r167606303
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/CoreOptions.java ---
@@ -127,7 +130,30 @@
 */
public static final ConfigOption<String> DEFAULT_FILESYSTEM_SCHEME = 
ConfigOptions
.key("fs.default-scheme")
-   .noDefaultValue();
+   .noDefaultValue()
+   .withDescription("The default filesystem scheme, used 
for paths that do not declare a scheme explicitly.");
--- End diff --

IMO the missing text isn't really needed, but I'll add it back to keep this 
PR as a straight port of the existing docs and merge it afterwards.


---


[GitHub] flink issue #5392: [FLINK-8475][config][docs] Integrate JM options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5392
  
merging.


---


[GitHub] flink issue #5391: [FLINK-8475][config][docs] Integrate BlobServer options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5391
  
yes, this wasn't in there at all :) merging.


---


[GitHub] flink issue #5390: [FLINK-8475][config][docs] Integrate SlotManager options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5390
  
merging.


---


[GitHub] flink issue #5389: [FLINK-8475][config][docs] Integrate REST options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5389
  
merging.


---


[GitHub] flink issue #5387: [FLINK-8475][config][docs] Integrate optimizer options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5387
  
Oh, I just forgot the annotation, will add `@PublicEvolving` while merging...


---


[GitHub] flink issue #5461: [FLINK-8475][config][docs] Integrate Mesos options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5461
  
yes, it will show up in the HA table. merging.


---


[GitHub] flink issue #5460: [FLINK-8475][config][docs] Integrate Algorithm options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5460
  
merging.


---


[GitHub] flink issue #5392: [FLINK-8475][config][docs] Integrate JM options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5392
  
plenty of missing options are being added by these PRs 😓 


---


[GitHub] flink pull request #5392: [FLINK-8475][config][docs] Integrate JM options

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5392#discussion_r167596687
  
--- Diff: docs/ops/config.md ---
@@ -322,7 +321,6 @@ The following parameters configure Flink's JobManager 
and TaskManagers.
 
 - `taskmanager.exit-on-fatal-akka-error`: Whether the TaskManager shall be 
terminated in case of a fatal Akka error (quarantining event). (DEFAULT: 
**false**)
 
-- `jobmanager.tdd.offload.minsize`: Maximum size of the 
`TaskDeploymentDescriptor`'s serialized task and job information to still 
transmit them via RPC. Larger blobs may be offloaded to the BLOB server. 
(DEFAULT: **1 KiB**).
--- End diff --

double-checked and yes, I couldn't find a reference anywhere.


---


[GitHub] flink issue #5461: [FLINK-8475][config][docs] Integrate Mesos options

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5461
  
Currently we assign each `ConfigOption` to exactly one table, so there's 
little we can do at the moment about this option.


---


[GitHub] flink issue #5415: [FLINK-3655] [core] Support multiple paths in FileInputFo...

2018-02-12 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5415
  
What would speak against creating a new FileInputFormat that supports 
multiple paths instead? Common code could be moved into a shared super class (I 
_think_ that would be allowed).


---


[GitHub] flink pull request #5464: [FLINK-8475][config][docs] Integrate Checkpointing...

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5464

[FLINK-8475][config][docs] Integrate Checkpointing options

## What is the purpose of the change

This PR integrates the Checkpointing `ConfigOptions` into the 
configuration docs generator.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate Checkpointing configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_cp

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5464.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5464


commit ea8f1ae8b36d793fec7cfced887bae38650c0ba6
Author: zentol <chesnay@...>
Date:   2018-01-23T13:57:20Z

[FLINK-8475][config][docs] Integrate Checkpointing options




---


[GitHub] flink pull request #5463: [FLINK-8475][config][docs] Integrate YARN options

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5463

[FLINK-8475][config][docs] Integrate YARN options

## What is the purpose of the change

This PR integrates the YARN `ConfigOptions` into the configuration docs 
generator.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate YARN configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_yarn

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5463.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5463


commit 7199209e6f6ed68589ab841dbbc781802e608e55
Author: zentol <chesnay@...>
Date:   2018-01-23T13:04:36Z

[FLINK-8475][config][docs] Integrate YARN options




---


[GitHub] flink pull request #5462: [FLINK-8475][config][docs] Integrate HA-ZK options

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5462

[FLINK-8475][config][docs] Integrate HA-ZK options

## What is the purpose of the change

This PR integrates the ZooKeeper HA `ConfigOptions` into the configuration 
docs generator.

## Brief change log

* Add `ConfigGroup` for ZooKeeper HA config options
* integrate ZooKeeper HA configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_ha_zk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5462.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5462


commit 14e168c1c9c526a5feb0cac5738ea9a3663b0466
Author: zentol <chesnay@...>
Date:   2018-01-23T12:50:32Z

[FLINK-8475][config][docs] Integrate HA-ZK options




---


[GitHub] flink pull request #5461: [FLINK-8475][config][docs] Integrate Mesos options

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5461

[FLINK-8475][config][docs] Integrate Mesos options

## What is the purpose of the change

This PR integrates the Mesos `ConfigOptions` into the configuration docs 
generator.

## Brief change log

* extend generator configuration to pick up 
`MesosOptions`/`MesosTaskManagerParameters` classes in flink-mesos
* update generator file matching to accept `MesosTaskManagerParameters`
* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate mesos configuration tables into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_mesos

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5461.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5461


commit fe3a15693e355e1fa1facb6adb21061767c680d3
Author: zentol <chesnay@...>
Date:   2018-01-23T13:20:12Z

[FLINK-8475][config][docs] Integrate Mesos options




---


[GitHub] flink pull request #5442: [FLINK-7713][flip6] Implement JarUploadHandler

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5442#discussion_r167542881
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/ng/JarUploadHandler.java
 ---
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor.handlers.ng;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.rest.handler.AbstractRestHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.messages.EmptyMessageParameters;
+import org.apache.flink.runtime.rest.messages.FileUpload;
+import org.apache.flink.runtime.rest.messages.MessageHeaders;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseStatus;
+
+import javax.annotation.Nonnull;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.Executor;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * Handles .jar file uploads.
+ */
+public class JarUploadHandler extends
+   AbstractRestHandler<RestfulGateway, FileUpload, 
JarUploadResponseBody, EmptyMessageParameters> {
+
+   private final Path jarDir;
+
+   private final Executor executor;
+
+   public JarUploadHandler(
+   final CompletableFuture<String> localRestAddress,
+   final GatewayRetriever<? extends RestfulGateway> leaderRetriever,
+   final Time timeout,
+   final Map<String, String> responseHeaders,
+   final MessageHeaders<FileUpload, JarUploadResponseBody, 
EmptyMessageParameters> messageHeaders,
+   final Path jarDir,
+   final Executor executor) {
+   super(localRestAddress, leaderRetriever, timeout, 
responseHeaders, messageHeaders);
+   this.jarDir = requireNonNull(jarDir);
+   this.executor = requireNonNull(executor);
+   }
+
+   @Override
+   protected CompletableFuture<JarUploadResponseBody> handleRequest(
+   @Nonnull final HandlerRequest<FileUpload, 
EmptyMessageParameters> request,
+   @Nonnull final RestfulGateway gateway) throws 
RestHandlerException {
+
+   final FileUpload fileUpload = request.getRequestBody();
+   return CompletableFuture.supplyAsync(() -> {
+   if 
(!fileUpload.getPath().getFileName().toString().endsWith(".jar")) {
+   deleteUploadedFile(fileUpload);
+   throw new CompletionException(new 
RestHandlerException(
+   "Only Jar files are allowed.",
+   HttpResponseStatus.BAD_REQUEST));
+   } else {
+   try {
+   Files.move(fileUpload.getPath(), 
jarDir.resolve(fileUpload.getPath().getFileName()));
--- End diff --

please guard the `jarDir` access as done in 
8fdea6093a55c33732ae869b82552371b8142c2a. I suppose you'll have to create new 
utility methods outside the `WebRuntimeMonitor`.


---


[GitHub] flink pull request #5455: [FLINK-7711][flip6] Port JarListHandler

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5455#discussion_r167542075
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/ng/JarListHandler.java
 ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor.handlers.ng;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.client.program.PackagedProgram;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.runtime.rest.handler.AbstractRestHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.messages.EmptyMessageParameters;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.MessageHeaders;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+import org.apache.flink.util.FlinkException;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.Executor;
+import java.util.jar.JarFile;
+import java.util.jar.Manifest;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Handle request for listing uploaded jars.
+ */
+public class JarListHandler extends AbstractRestHandler<RestfulGateway, 
EmptyRequestBody, JarListInfo, EmptyMessageParameters> {
+
+   private final File jarDir;
+
+   private final Executor executor;
+
+   public JarListHandler(
+   CompletableFuture<String> localRestAddress,
+   GatewayRetriever<? extends RestfulGateway> leaderRetriever,
+   Time timeout,
+   Map<String, String> responseHeaders,
+   MessageHeaders<EmptyRequestBody, JarListInfo, 
EmptyMessageParameters> messageHeaders,
+   File jarDir,
+   Executor executor) {
+   super(localRestAddress, leaderRetriever, timeout, 
responseHeaders, messageHeaders);
+
+   this.jarDir = requireNonNull(jarDir);
+   this.executor = requireNonNull(executor);
+   }
+
+   @Override
+   protected CompletableFuture<JarListInfo> handleRequest(@Nonnull 
HandlerRequest<EmptyRequestBody, EmptyMessageParameters> request, @Nonnull 
RestfulGateway gateway) throws RestHandlerException {
+   final String localAddress;
+   checkState(localAddressFuture.isDone());
+
+   try {
+   localAddress = localAddressFuture.get();
+   } catch (Exception e) {
+   return FutureUtils.completedExceptionally(e);
+   }
+
+   return CompletableFuture.supplyAsync(() -> {
+   try {
+   List<JarListInfo.JarFileInfo> jarFileList = new ArrayList<>();
+   File[] list = jarDir.listFiles(new 
FilenameFilter() {
--- End diff --

please guard the `jarDir` access as done in 
8fdea6093a55c33732ae869b82552371b8142c2a. I suppose you'll have to create new 
utility methods outside the `WebRuntimeMonitor`.


---


[GitHub] flink pull request #5455: [FLINK-7711][flip6] Port JarListHandler

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5455#discussion_r167541843
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/ng/JarUploadHandler.java
 ---
@@ -0,0 +1,103 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor.handlers.ng;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.runtime.rest.handler.AbstractRestHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.messages.EmptyMessageParameters;
+import org.apache.flink.runtime.rest.messages.FileUpload;
+import org.apache.flink.runtime.rest.messages.MessageHeaders;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+
+import 
org.apache.flink.shaded.netty4.io.netty.handler.codec.http.HttpResponseStatus;
+
+import javax.annotation.Nonnull;
+
+import java.io.IOException;
+import java.nio.file.Files;
+import java.nio.file.Path;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.Executor;
+
+import static java.util.Objects.requireNonNull;
+
+/**
+ * Handles .jar file uploads.
+ */
+public class JarUploadHandler extends
+   AbstractRestHandler<RestfulGateway, FileUpload, 
JarUploadResponseBody, EmptyMessageParameters> {
+
+   private final Path jarDir;
+
+   private final Executor executor;
+
+   public JarUploadHandler(
+   final CompletableFuture<String> localRestAddress,
+   final GatewayRetriever<? extends RestfulGateway> leaderRetriever,
+   final Time timeout,
+   final Map<String, String> responseHeaders,
+   final MessageHeaders<FileUpload, JarUploadResponseBody, 
EmptyMessageParameters> messageHeaders,
+   final Path jarDir,
+   final Executor executor) {
+   super(localRestAddress, leaderRetriever, timeout, 
responseHeaders, messageHeaders);
+   this.jarDir = requireNonNull(jarDir);
+   this.executor = requireNonNull(executor);
+   }
+
+   @Override
+   protected CompletableFuture<JarUploadResponseBody> handleRequest(
+   @Nonnull final HandlerRequest<FileUpload, 
EmptyMessageParameters> request,
+   @Nonnull final RestfulGateway gateway) throws 
RestHandlerException {
+
+   final FileUpload fileUpload = request.getRequestBody();
+   return CompletableFuture.supplyAsync(() -> {
+   if 
(!fileUpload.getPath().getFileName().toString().endsWith(".jar")) {
+   deleteUploadedFile(fileUpload);
+   throw new CompletionException(new 
RestHandlerException(
+   "Only Jar files are allowed.",
+   HttpResponseStatus.BAD_REQUEST));
+   } else {
+   try {
+   Files.move(fileUpload.getPath(), 
jarDir.resolve(fileUpload.getPath().getFileName()));
--- End diff --

please guard the `jarDir` access as done in 
8fdea6093a55c33732ae869b82552371b8142c2a.


---


[GitHub] flink pull request #5455: [FLINK-7711][flip6] Port JarListHandler

2018-02-12 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5455#discussion_r167540798
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/handlers/ng/JarListHandler.java
 ---
@@ -0,0 +1,156 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.webmonitor.handlers.ng;
+
+import org.apache.flink.api.common.time.Time;
+import org.apache.flink.client.program.PackagedProgram;
+import org.apache.flink.runtime.concurrent.FutureUtils;
+import org.apache.flink.runtime.rest.handler.AbstractRestHandler;
+import org.apache.flink.runtime.rest.handler.HandlerRequest;
+import org.apache.flink.runtime.rest.handler.RestHandlerException;
+import org.apache.flink.runtime.rest.messages.EmptyMessageParameters;
+import org.apache.flink.runtime.rest.messages.EmptyRequestBody;
+import org.apache.flink.runtime.rest.messages.MessageHeaders;
+import org.apache.flink.runtime.webmonitor.RestfulGateway;
+import org.apache.flink.runtime.webmonitor.retriever.GatewayRetriever;
+import org.apache.flink.util.FlinkException;
+
+import javax.annotation.Nonnull;
+
+import java.io.File;
+import java.io.FilenameFilter;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.Executor;
+import java.util.jar.JarFile;
+import java.util.jar.Manifest;
+
+import static java.util.Objects.requireNonNull;
+import static org.apache.flink.util.Preconditions.checkState;
+
+/**
+ * Handle request for listing uploaded jars.
+ */
+public class JarListHandler extends AbstractRestHandler<RestfulGateway, 
EmptyRequestBody, JarListInfo, EmptyMessageParameters> {
+
+   private final File jarDir;
+
+   private final Executor executor;
+
+   public JarListHandler(
+   CompletableFuture<String> localRestAddress,
+   GatewayRetriever<? extends RestfulGateway> leaderRetriever,
+   Time timeout,
+   Map<String, String> responseHeaders,
+   MessageHeaders<EmptyRequestBody, JarListInfo, 
EmptyMessageParameters> messageHeaders,
+   File jarDir,
+   Executor executor) {
+   super(localRestAddress, leaderRetriever, timeout, 
responseHeaders, messageHeaders);
+
+   this.jarDir = requireNonNull(jarDir);
+   this.executor = requireNonNull(executor);
+   }
+
+   @Override
+   protected CompletableFuture<JarListInfo> handleRequest(@Nonnull 
HandlerRequest<EmptyRequestBody, EmptyMessageParameters> request, @Nonnull 
RestfulGateway gateway) throws RestHandlerException {
+   final String localAddress;
+   checkState(localAddressFuture.isDone());
+
+   try {
+   localAddress = localAddressFuture.get();
+   } catch (Exception e) {
+   return FutureUtils.completedExceptionally(e);
+   }
+
+   return CompletableFuture.supplyAsync(() -> {
+   try {
+   List jarFileList = new 
ArrayList<>();
+   File[] list = jarDir.listFiles(new 
FilenameFilter() {
+   @Override
+   public boolean accept(File dir, String 
name) {
+   return name.endsWith(".jar");
+   }
+   });
+   // last modified ascending order
+   Arrays.sort(list, (f1, f2) -> 
Long.compare(f2.lastModified(), f1.lastModified()));
+
+   for (File f : list) {
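The listing-and-sorting logic quoted above can be exercised in isolation. A self-contained sketch (the temporary directory and file names are made up for illustration; the anonymous `FilenameFilter` is written as an equivalent lambda):

```java
import java.io.File;
import java.nio.file.Files;
import java.util.Arrays;

public class JarListingDemo {

    /** Returns the .jar files in dir, most recently modified first. */
    static File[] listJarsNewestFirst(File dir) {
        // FilenameFilter as a lambda; equivalent to the anonymous class in the diff.
        File[] jars = dir.listFiles((d, name) -> name.endsWith(".jar"));
        // Comparator matching the quoted code: larger lastModified sorts first.
        Arrays.sort(jars, (f1, f2) -> Long.compare(f2.lastModified(), f1.lastModified()));
        return jars;
    }

    public static void main(String[] args) throws Exception {
        File dir = Files.createTempDirectory("jars").toFile();
        File older = new File(dir, "older.jar");
        File newer = new File(dir, "newer.jar");
        older.createNewFile();
        newer.createNewFile();
        new File(dir, "ignored.txt").createNewFile();
        // Force distinct modification times instead of sleeping between creates.
        older.setLastModified(1_000_000L);
        newer.setLastModified(2_000_000L);

        for (File f : listJarsNewestFirst(dir)) {
            System.out.println(f.getName()); // newer.jar, then older.jar
        }
    }
}
```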

[GitHub] flink pull request #5460: [FLINK-8475][config][docs] Integrate Algorithm opt...

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5460

[FLINK-8475][config][docs] Integrate Algorithm options

## What is the purpose of the change

This PR ports the runtime algorithm ConfigConstants to `ConfigOptions` and 
integrates them into the configuration docs generator.

## Brief change log

* runtime algorithm config constants to config options
* integrate runtime algorithm configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_algorithm

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5460.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5460


commit 4d991f39823898b4385007a3ef445480f196efae
Author: zentol <chesnay@...>
Date:   2018-01-30T13:06:30Z

[FLINK-8475][config][docs] Integrate Algorithm options




---


[GitHub] flink pull request #5459: [FLINK-8475][config][docs] Integrate FS options

2018-02-12 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5459

[FLINK-8475][config][docs] Integrate FS options

## What is the purpose of the change

This PR ports the filesystem ConfigConstants to `ConfigOptions` and 
integrates them into the configuration docs generator.

## Brief change log

* port filesystem config constants to config options
* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate filesystem configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_fs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5459.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5459


commit 6238b4184295ab31933b4cc62a1f30ac11c0f09f
Author: zentol <chesnay@...>
Date:   2018-01-22T15:16:02Z

[FLINK-8475][config][docs] Integrate FS options




---


[GitHub] flink pull request #5443: [FLINK-8626] Introduce BackPressureStatsTracker in...

2018-02-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5443#discussion_r167413109
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitor.java
 ---
@@ -143,7 +143,7 @@
 
private final StackTraceSampleCoordinator stackTraceSamples;
 
-   private final BackPressureStatsTracker backPressureStatsTracker;
+   private final BackPressureStatsTrackerImpl backPressureStatsTrackerImpl;
--- End diff --

Does this only apply to the legacy handler? (It appears the 
JobManagerServices also exposes the implementation instead of the interface.)


---


[GitHub] flink pull request #5443: [FLINK-8626] Introduce BackPressureStatsTracker in...

2018-02-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5443#discussion_r167412925
  
--- Diff: 
flink-runtime/src/test/java/org/apache/flink/runtime/rest/handler/legacy/JobVertexBackPressureHandlerTest.java
 ---
@@ -57,7 +57,7 @@ public void testGetPaths() {
@Test
public void testResponseNoStatsAvailable() throws Exception {
ExecutionJobVertex jobVertex = mock(ExecutionJobVertex.class);
-   BackPressureStatsTracker statsTracker = 
mock(BackPressureStatsTracker.class);
+   BackPressureStatsTrackerImpl statsTracker = 
mock(BackPressureStatsTrackerImpl.class);
--- End diff --

we could replace the mocks in this class with VoidBackPressureStatsTracker 
/ lambda implementations.
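The lambda-based replacement can be sketched as follows. `ExecutionJobVertex` and the tracker's return type are simplified stand-ins for the real Flink types (the real method returns an `Optional` of back-pressure stats, not `Optional<String>`):

```java
import java.util.Optional;

public class LambdaTrackerDemo {

    // Minimal stand-in for the Flink class, just to give the method a parameter type.
    interface ExecutionJobVertex {}

    // A single-abstract-method tracker interface can be implemented with a
    // lambda instead of a Mockito mock.
    @FunctionalInterface
    interface BackPressureStatsTracker {
        Optional<String> getOperatorBackPressureStats(ExecutionJobVertex vertex);
    }

    public static void main(String[] args) {
        // "No stats available" case, previously expressed with a mock:
        BackPressureStatsTracker noStats = vertex -> Optional.empty();
        // Canned stats for the positive-path test:
        BackPressureStatsTracker withStats = vertex -> Optional.of("stats");

        System.out.println(noStats.getOperatorBackPressureStats(null).isPresent());  // false
        System.out.println(withStats.getOperatorBackPressureStats(null).orElse("")); // stats
    }
}
```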


---


[GitHub] flink pull request #5447: [FLINK-8423] OperatorChain#pushToOperator catch bl...

2018-02-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5447#discussion_r167395295
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/OperatorChain.java
 ---
@@ -591,16 +591,18 @@ public void collect(StreamRecord record) {
operator.setKeyContextElement1(copy);
operator.processElement(copy);
} catch (ClassCastException e) {
-   // Enrich error message
-   ClassCastException replace = new 
ClassCastException(
-   String.format(
-   "%s. Failed to push OutputTag 
with id '%s' to operator. " +
-   "This can occur when multiple 
OutputTags with different types " +
-   "but identical names are being 
used.",
-   e.getMessage(),
-   outputTag.getId()));
-
-   throw new 
ExceptionInChainedOperatorException(replace);
+   if (outputTag != null) {
--- End diff --

Neither. Just add an else block that re-throws the original exception.
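The suggested shape, as a minimal standalone sketch (the tag is reduced to a `String` id and the chained-operator wrapper is omitted; only the enrich-or-rethrow branching is shown):

```java
public class RethrowDemo {

    // Enrich the ClassCastException only when a side-output tag is involved;
    // otherwise re-throw the original exception instead of swallowing it.
    static void handle(ClassCastException e, String outputTagId) {
        if (outputTagId != null) {
            throw new ClassCastException(String.format(
                "%s. Failed to push OutputTag with id '%s' to operator.",
                e.getMessage(), outputTagId));
        } else {
            throw e; // never swallow the original exception
        }
    }

    public static void main(String[] args) {
        ClassCastException original = new ClassCastException("A cannot be cast to B");
        try {
            handle(original, null);
        } catch (ClassCastException rethrown) {
            System.out.println(rethrown == original); // true: same exception instance
        }
        try {
            handle(original, "side-output");
        } catch (ClassCastException enriched) {
            System.out.println(enriched.getMessage().contains("side-output")); // true
        }
    }
}
```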


---


[GitHub] flink pull request #5447: [FLINK-8423] OperatorChain#pushToOperator catch bl...

2018-02-10 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5447#discussion_r167392891
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/OperatorChain.java
 ---
@@ -591,16 +591,18 @@ public void collect(StreamRecord record) {
operator.setKeyContextElement1(copy);
operator.processElement(copy);
} catch (ClassCastException e) {
-   // Enrich error message
-   ClassCastException replace = new 
ClassCastException(
-   String.format(
-   "%s. Failed to push OutputTag 
with id '%s' to operator. " +
-   "This can occur when multiple 
OutputTags with different types " +
-   "but identical names are being 
used.",
-   e.getMessage(),
-   outputTag.getId()));
-
-   throw new 
ExceptionInChainedOperatorException(replace);
+   if (outputTag != null) {
--- End diff --

You are now completely swallowing the exception if the `outputTag` is null, 
which is unacceptable.


---


[GitHub] flink pull request #5443: [FLINK-8626] Introduce BackPressureStatsTracker in...

2018-02-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5443#discussion_r167267838
  
--- Diff: 
flink-runtime-web/src/main/java/org/apache/flink/runtime/webmonitor/WebRuntimeMonitor.java
 ---
@@ -143,7 +143,7 @@
 
private final StackTraceSampleCoordinator stackTraceSamples;
 
-   private final BackPressureStatsTracker backPressureStatsTracker;
+   private final BackPressureStatsTrackerImpl backPressureStatsTrackerImpl;
--- End diff --

this should be typed to the interface, and the variable name should not end 
in "Impl".


---


[GitHub] flink pull request #5443: [FLINK-8626] Introduce BackPressureStatsTracker in...

2018-02-09 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5443#discussion_r167268048
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/rest/handler/legacy/backpressure/VoidBackPressureStatsTracker.java
 ---
@@ -0,0 +1,36 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.runtime.rest.handler.legacy.backpressure;
+
+import org.apache.flink.runtime.executiongraph.ExecutionJobVertex;
+
+import java.util.Optional;
+
+/**
+ * {@link BackPressureStatsTracker} implementation which returns always no 
back pressure statistics.
--- End diff --

switch "returns" and "always"?


---


[GitHub] flink issue #5426: [FLINK-8362] [elasticsearch] Shade all ES connector depen...

2018-02-08 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5426
  
notice files look good to me.


---


[GitHub] flink pull request #5426: [FLINK-8362] [elasticsearch] Shade all ES connecto...

2018-02-08 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5426#discussion_r166906856
  
--- Diff: 
flink-connectors/flink-connector-elasticsearch2/src/main/resources/META-INF/NOTICE
 ---
@@ -0,0 +1,86 @@
+This project includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+-
+
+This project bundles the following dependencies under
+the Apache Software License 2.0
--- End diff --

yaml, tartarus missing?


---


[GitHub] flink pull request #5426: [FLINK-8362] [elasticsearch] Shade all ES connecto...

2018-02-08 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5426#discussion_r166904848
  
--- Diff: 
flink-connectors/flink-connector-elasticsearch/src/main/resources/META-INF/NOTICE
 ---
@@ -0,0 +1,57 @@
+This project includes software developed at
+The Apache Software Foundation (http://www.apache.org/).
+
+-
+
+This project bundles the following dependencies under
+the Apache Software License 2.0
+
+  - org.apache.lucene : lucene-core version 4.10.4
--- End diff --

isn't this list missing some dependencies? like jackson, joda-time, yaml, 
tartarus.


---


[GitHub] flink issue #5357: [hotfix][JobGraph] Eliminate the conditions of parallelis...

2018-02-07 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5357
  
@maqingxiang You are correct that the parallelism check is redundant, but 
that doesn't automatically mean that it should be removed.

This change imo hurts readability as the basic parallelism condition is no 
longer explicit, but implicitly covered by the partitioner used.


---


[GitHub] flink pull request #5418: [FLINK-8553] switch flink-metrics-datadog to async...

2018-02-07 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5418#discussion_r166564479
  
--- Diff: 
flink-metrics/flink-metrics-datadog/src/main/java/org/apache/flink/metrics/datadog/DatadogHttpClient.java
 ---
@@ -94,4 +96,22 @@ public void close() {
client.dispatcher().executorService().shutdown();
client.connectionPool().evictAll();
}
+
+   private static class EmptyCallback implements Callback {
+   private static final EmptyCallback singleton = new 
EmptyCallback();
+
+   public static Callback getEmptyCallback() {
+   return singleton;
+   }
+
+   @Override
+   public void onFailure(Call call, IOException e) {
+   // Do nothing
--- End diff --

Let's log the exception at least as DEBUG.
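A sketch of the requested change using only the JDK. The `Callback` interface here is a minimal stand-in for OkHttp's (the real one also has `onResponse`), and `java.util.logging`'s `FINE` level plays the role of DEBUG:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class LoggingCallbackDemo {

    // Minimal stand-in for OkHttp's Callback interface.
    interface Callback {
        void onFailure(IOException e);
    }

    /** Singleton callback that logs failures at a debug-like level instead of swallowing them. */
    static final class DebugLoggingCallback implements Callback {
        private static final Logger LOG =
            Logger.getLogger(DebugLoggingCallback.class.getName());
        private static final DebugLoggingCallback INSTANCE = new DebugLoggingCallback();

        static Callback getInstance() {
            return INSTANCE;
        }

        @Override
        public void onFailure(IOException e) {
            // FINE roughly corresponds to SLF4J's DEBUG level.
            LOG.log(Level.FINE, "Failed sending request to Datadog", e);
        }
    }

    public static void main(String[] args) {
        Callback cb = DebugLoggingCallback.getInstance();
        cb.onFailure(new IOException("connection refused")); // logged, not rethrown
        System.out.println(cb == DebugLoggingCallback.getInstance()); // true: singleton
    }
}
```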


---


[GitHub] flink pull request #5420: [FLINK-8576][QS] Reduce verbosity when classes can...

2018-02-07 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5420

[FLINK-8576][QS] Reduce verbosity when classes can't be found

This PR reduces the verbosity of DEBUG logging messages when the 
flink-queryable-state-runtime jar is not on the classpath. Instead of the full 
stacktrace we now only include the exception message.

I've also modified the message to explicitly mention that this jar is only 
needed if queryable state is to be used. The previous message made it sound as 
if this was a critical issue that has to be fixed.
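The message-only logging described above can be sketched like this; the probed class name is made up, and the returned string only approximates the reworded log message:

```java
public class QuietClassProbeDemo {

    /** Probes for an optional class, reporting only the message instead of a full stack trace. */
    static String probe(String className) {
        try {
            Class.forName(className);
            return "found " + className;
        } catch (ClassNotFoundException e) {
            // No stack trace, plus a hint that this is benign unless the
            // optional feature (queryable state) is actually needed.
            return "Could not load class (" + e.getMessage()
                + "). This is only relevant if you intend to use queryable state.";
        }
    }

    public static void main(String[] args) {
        // "com.example.MissingFeature" is a made-up class name for illustration.
        System.out.println(probe("com.example.MissingFeature"));
    }
}
```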

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8576

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5420.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5420


commit 0bdb29328324cd8c30418c99ea5e0dd66938487b
Author: zentol <chesnay@...>
Date:   2018-02-07T09:31:41Z

[FLINK-8576][QS] Reduce verbosity when classes can't be found




---


[GitHub] flink pull request #5419: [FLINK-8574][travis] Add timestamp to logging mess...

2018-02-07 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5419

[FLINK-8574][travis] Add timestamp to logging messages

With this PR, logging statements on Travis also include a timestamp (e.g. 
`09:00:27.972`). This allows us to better judge how long each part of the build 
takes, in particular plugins.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8574

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5419.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5419


commit 4245b3069add4139562fadbd3c096f031f112dd2
Author: zentol <chesnay@...>
Date:   2018-02-07T08:52:23Z

[FLINK-8574][travis] Add timestamp to logging messages




---


[GitHub] flink pull request #5417: [FLINK-8565][tests] Ensure locationBytes.length > ...

2018-02-06 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5417

[FLINK-8565][tests] Ensure locationBytes.length > 0 in CheckpointOptionsTest

This PR fixes a test instability in `CheckpointOptionsTest#testSavepoint`. 
The test generated a byte array of a random size, which could also be 0. This 
caused an `IllegalArgumentException` when being passed to the 
`CheckpointStorageLocationReference` constructor.

We now add 1 to the randomly picked size, and compensate this addition by 
reducing the upper bound by 1.
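The shift-by-one trick can be shown in isolation. A sketch under assumptions: `randomLocationBytes` and the bound of 8192 are made up for illustration, not taken from the test:

```java
import java.util.Random;

public class RandomLengthDemo {

    // nextInt(maxExclusive - 1) yields [0, maxExclusive - 2]; adding 1 shifts
    // the range to [1, maxExclusive - 1], so the array is never empty.
    static byte[] randomLocationBytes(Random rnd, int maxExclusive) {
        byte[] bytes = new byte[rnd.nextInt(maxExclusive - 1) + 1];
        rnd.nextBytes(bytes);
        return bytes;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int i = 0; i < 10_000; i++) {
            int len = randomLocationBytes(rnd, 8192).length;
            if (len < 1 || len >= 8192) {
                throw new AssertionError("length out of range: " + len);
            }
        }
        System.out.println("ok");
    }
}
```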

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8565

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5417.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5417


commit 6e22d496f0a92c93660f20e137e59cd30087e199
Author: zentol <chesnay@...>
Date:   2018-02-06T13:07:23Z

[FLINK-8565][tests] Ensure locationBytes.length > 0




---


[GitHub] flink issue #4809: [FLINK-7803][Documentation] Add missing savepoint informa...

2018-02-06 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/4809
  
will merge this while addressing the comments


---


[GitHub] flink pull request #5413: [hotfix][table][tests] Set @Ignore description for...

2018-02-05 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5413

[hotfix][table][tests] Set @Ignore description for RowCsvInputFormatT…

Trivial change that moves the reasoning for `@Ignore` from a comment into 
the annotation itself.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink hotfix_ignore

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5413.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5413


commit 9a189ee22769ca34ee71d3145ee48d5cecd5271c
Author: zentol <chesnay@...>
Date:   2018-02-05T15:01:48Z

[hotfix][table][tests] Set @Ignore description for 
RowCsvInputFormatTest#testParserCorrectness




---


[GitHub] flink issue #5161: [FLINK-7608][metric] Refactor latency statistics metric

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5161
  
merging.


---


[GitHub] flink issue #5412: [FLINK-8559][RocksDB] Release resources if snapshot opera...

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5412
  
merging.


---


[GitHub] flink issue #5072: [FLINK-7984][build] Bump snappy-java to 1.1.4

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5072
  
@yew1eb Could you close the PR? The issue was addressed in 
f1e4d25c11a678688064492d50ffad38c39ea877.


---


[GitHub] flink pull request #5412: [FLINK-8559][RocksDB] Release resources if snapsho...

2018-02-05 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5412

[FLINK-8559][RocksDB] Release resources if snapshot operation fails

## What is the purpose of the change

This PR ensures that RocksDB resources are released if 
`RocksDBIncrementalSnapshotOperation#takeSnapshot` throws an Exception.

We now catch the exception, cancel the SnapshotOperation, and re-throw the 
original exception.
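The catch-cancel-rethrow pattern described above can be sketched with stand-in types; `SnapshotOperation`, `takeSnapshot`, and `stop` here are simplified placeholders for the RocksDB classes, not their real signatures:

```java
public class SnapshotCleanupDemo {

    static boolean cancelled = false;

    // Stand-in for the snapshot operation; takeSnapshot fails, and stop()
    // represents releasing the acquired (native) resources.
    static class SnapshotOperation {
        Object takeSnapshot() {
            throw new IllegalStateException("boom");
        }
        void stop() {
            cancelled = true;
        }
    }

    static Object guardedSnapshot(SnapshotOperation op) throws Exception {
        try {
            return op.takeSnapshot();
        } catch (Exception e) {
            // Release acquired resources, then surface the original failure.
            op.stop();
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            guardedSnapshot(new SnapshotOperation());
        } catch (Exception expected) {
            // the original exception is re-thrown unchanged
        }
        System.out.println(cancelled); // true: resources were released on failure
    }
}
```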

## Verifying this change

I've verified this manually by running 
`JobManagerHACheckpointRecoveryITCase` on Windows where `takeSnapshot` fails 
due to FLINK-8557.

I couldn't come up with a proper test. The method hardly does anything in the 
first place and every solution i could think of would depend a lot on 
implementation details (like mocking `Checkpoint.create()` to throw an 
exception).

## Does this pull request potentially affect one of the following parts:

  - Dependencies (does it add or upgrade a dependency): (no)
  - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
  - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
  - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
  - The S3 file system connector: (no)

## Documentation

  - Does this pull request introduce a new feature? (no)
  - If yes, how is the feature documented? (not applicable)

@StefanRRichter 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8559

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5412.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5412


commit 05f0ff5e353117894af4ba7dc096c3256d80450b
Author: zentol <chesnay@...>
Date:   2018-02-05T12:15:29Z

[FLINK-8559][RocksDB] Release resources if snapshot operation fails




---


[GitHub] flink issue #5404: [FLINK-8550][table] Iterate over entryset instead of keys

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5404
  
merging.


---


[GitHub] flink issue #5408: [hotfix][docs] Fix typos in windows document

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5408
  
merging.


---


[GitHub] flink issue #5407: [hotfix][build] Fix duplicate maven enforce plugin declar...

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5407
  
merging.


---


[GitHub] flink pull request #5406: [hotfix] Fix typos in comments.

2018-02-05 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5406#discussion_r165916481
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolGateway.java
 ---
@@ -108,7 +108,7 @@
 * individually accepted or rejected by returning the collection of 
accepted
 * slot offers.
 *
-* @param taskManagerLocation from which the slot offers originate
+* @param taskManagerLocation from which the slot offer originates
--- End diff --

this method on the other hand offers multiple slots, so plural makes more 
sense.


---


[GitHub] flink issue #5394: [FLINK-6571][tests] Catch InterruptedException in StreamS...

2018-02-05 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5394
  
How about calling `Thread.currentThread().interrupt();` only after having 
left the loop?
```
public void run(SourceContext ctx) throws Exception {
boolean setInterruptFlag = false;
while (running) {
try {
Thread.sleep(20);
} catch (InterruptedException ignored) {
setInterruptFlag = true;
}
}
if (setInterruptFlag) {
Thread.currentThread().interrupt();
}
}
```

This should behave like the original proposal, without the hot loop.
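A runnable sketch of that behavior (the loop body is simplified to a fixed iteration count; `sleepLoop` is a made-up name):

```java
public class InterruptFlagDemo {

    /**
     * Simulates the test source's loop: swallow InterruptedException so the
     * loop keeps running, but remember it and restore the thread's interrupt
     * flag once the loop has exited.
     */
    static void sleepLoop(int iterations) {
        boolean setInterruptFlag = false;
        for (int i = 0; i < iterations; i++) {
            try {
                Thread.sleep(20);
            } catch (InterruptedException ignored) {
                // No hot loop: the remaining iterations still block in sleep.
                setInterruptFlag = true;
            }
        }
        if (setInterruptFlag) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            sleepLoop(3);
            System.out.println("flag restored: " + Thread.currentThread().isInterrupted());
        });
        worker.start();
        worker.interrupt(); // delivered during one of the sleeps
        worker.join();      // prints "flag restored: true"
    }
}
```

Because `Thread.sleep` clears the interrupt status when it throws, deferring the `interrupt()` call keeps the later sleeps blocking normally, which is what avoids the hot loop.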


---


[GitHub] flink pull request #5406: [hotfix] Fix typos in comments.

2018-02-02 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5406#discussion_r165807233
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolGateway.java
 ---
@@ -90,11 +90,11 @@
CompletableFuture releaseTaskManager(final ResourceID 
resourceId);
 
/**
-* Offers a slot to the {@link SlotPool}. The slot offer can be 
accepted or
+* Offers a slot to the {@link SlotPool}. The slot offers can be 
accepted or
 * rejected.
 *
-* @param taskManagerLocation from which the slot offer originates
-* @param taskManagerGateway to talk to the slot offerer
+* @param taskManagerLocation from which the slot offers originate
+* @param taskManagerGateway to talk to the slot offers
--- End diff --

the previous version was correct


---


[GitHub] flink pull request #5406: [hotfix] Fix typos in comments.

2018-02-02 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5406#discussion_r165807228
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolGateway.java
 ---
@@ -109,7 +109,7 @@
 * slot offers.
 *
 * @param taskManagerLocation from which the slot offers originate
-* @param taskManagerGateway to talk to the slot offerer
+* @param taskManagerGateway to talk to the slot offers
--- End diff --

the previous version was correct


---


[GitHub] flink pull request #5406: [hotfix] Fix typos in comments.

2018-02-02 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5406#discussion_r165807220
  
--- Diff: 
flink-runtime/src/main/java/org/apache/flink/runtime/jobmaster/slotpool/SlotPoolGateway.java
 ---
@@ -90,11 +90,11 @@
CompletableFuture releaseTaskManager(final ResourceID 
resourceId);
 
/**
-* Offers a slot to the {@link SlotPool}. The slot offer can be 
accepted or
+* Offers a slot to the {@link SlotPool}. The slot offers can be 
accepted or
 * rejected.
 *
-* @param taskManagerLocation from which the slot offer originates
-* @param taskManagerGateway to talk to the slot offerer
+* @param taskManagerLocation from which the slot offers originate
--- End diff --

given that this method offers a single slot using singular makes more sense


---


[GitHub] flink issue #5155: [FLINK-4812][metrics] Expose currentLowWatermark for all ...

2018-02-02 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5155
  
merging.


---


[GitHub] flink pull request #5402: [FLINK-8549] [config] Move TimerServiceOptions int...

2018-02-02 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5402#discussion_r165623427
  
--- Diff: 
flink-core/src/main/java/org/apache/flink/configuration/TaskManagerOptions.java 
---
@@ -206,6 +206,14 @@
key("task.cancellation.timeout")
.defaultValue(18L);
 
+   /**
+* This configures how long we wait for the timers to finish all 
pending timer threads
+* when the stream task is cancelled .
+*/
+   public static final ConfigOption<Long> TASK_CANCELLATION_TIMEOUT_TIMERS 
= ConfigOptions
+   .key("task.cancellation.timeout.timers")
+   .defaultValue(7500L);
--- End diff --

add deprecated key? (I'm not quite sure whether the previous option was 
part of a release)


---


[GitHub] flink issue #5399: [hotfix] Use LOG.error() when logging failure state chang...

2018-02-01 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5399
  
Could you modify the title to `[FLINK-6206] [runtime] Use LOG.error() when 
logging failure state changes`? There's actually a JIRA ticket that covers this 
change.


---


[GitHub] flink pull request #5364: [FLINK-8472] [test] Extend all migration tests for...

2018-02-01 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5364#discussion_r165459894
  
--- Diff: 
flink-tests/src/test/java/org/apache/flink/test/state/operator/restore/keyed/KeyedJob.java
 ---
@@ -100,9 +100,7 @@ public static void main(String[] args) throws Exception 
{
.map(new StatefulStringStoringMap(mode, "first"))
.setParallelism(4);
 
-   if (mode == ExecutionMode.MIGRATE || mode == 
ExecutionMode.RESTORE) {
--- End diff --

yeah it should be alright to remove that, but let's chain the uid call to 
the operator creation as we do for the others for style points.


---


[GitHub] flink issue #5161: [FLINK-7608][metric] Refactor latency statistics metric

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5161
  
Correction: We still can't display them in the UI since we have no tab for 
job metrics.


---


[GitHub] flink pull request #5163: [FLINK-8254][REST][docs] Bandaid generated documen...

2018-01-31 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5163


---


[GitHub] flink issue #5163: [FLINK-8254][REST][docs] Bandaid generated documentation

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5163
  
at this point we may just as well wait for the next flink-shaded release


---


[GitHub] flink pull request #5394: [FLINK-6571][tests] Catch InterruptedException in ...

2018-01-31 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5394

[FLINK-6571][tests] Catch InterruptedException in StreamSourceOperato…

## What is the purpose of the change

This PR resolves a test instability in the StreamSourceOperatorTest, where 
the `InfiniteSource` could fail due to an `InterruptedException`.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 6571

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5394


commit 453ba011b6b6beaf5102f3f376beb7c3a7260892
Author: zentol <chesnay@...>
Date:   2018-01-31T14:07:55Z

[FLINK-6571][tests] Catch InterruptedException in StreamSourceOperatorTest




---


[GitHub] flink issue #5155: [FLINK-4812][metrics] Expose currentLowWatermark for all ...

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5155
  
@aljoscha I've implemented your suggestion; in hindsight that's a rather 
obvious solution, isn't it? Also rebased the branch.


---


[GitHub] flink issue #5269: [FLINK-6004] [kinesis] Allow FlinkKinesisConsumer to skip...

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5269
  
looks good, +1


---


[GitHub] flink pull request #5336: (release-1.4) [FLINK-8419] [kafka] Register metric...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165023129
  
--- Diff: 
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
 ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = 
metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should 
expose Kafka-shipped metrics through the consumer metric group, not subtask 
metric group
--- End diff --

For 1.4 I would just remove the TODO since we won't fix it, but for 1.5 I 
would, as you suggested, register them twice.


---


[GitHub] flink pull request #5335: (master) [FLINK-8419] [kafka] Register metrics for...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5335#discussion_r165022709
  
--- Diff: 
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
 ---
@@ -95,21 +95,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = 
metricGroup.addGroup(KAFKA_CONSUMER_METRICS_GROUP);
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should 
expose Kafka-shipped metrics through the consumer metric group, not subtask 
metric group
--- End diff --

so why aren't we passing the consumerMetricGroup here?


---


[GitHub] flink pull request #5336: (release-1.4) [FLINK-8419] [kafka] Register metric...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165021513
  
--- Diff: 
flink-connectors/flink-connector-kafka-0.9/src/main/java/org/apache/flink/streaming/connectors/kafka/internal/Kafka09Fetcher.java
 ---
@@ -92,21 +93,19 @@ public Kafka09Fetcher(
watermarksPunctuated,
processingTimeProvider,
autoWatermarkInterval,
-   userCodeClassLoader,
+   userCodeClassLoader.getParent(),
+   consumerMetricGroup,
useMetrics);
 
this.deserializer = deserializer;
this.handover = new Handover();
 
-   final MetricGroup kafkaMetricGroup = 
metricGroup.addGroup("KafkaConsumer");
-   addOffsetStateGauge(kafkaMetricGroup);
-
this.consumerThread = new KafkaConsumerThread(
LOG,
handover,
kafkaProperties,
unassignedPartitionsQueue,
-   kafkaMetricGroup,
+   subtaskMetricGroup, // TODO: the thread should 
expose Kafka-shipped metrics through the consumer metric group, not subtask 
metric group
--- End diff --

so why aren't we passing the consumerMetricGroup here?


---


[GitHub] flink pull request #5336: (release-1.4) [FLINK-8419] [kafka] Register metric...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5336#discussion_r165021339
  
--- Diff: 
flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java
 ---
@@ -560,16 +585,11 @@ private void updateMinPunctuatedWatermark(Watermark 
nextWatermark) {
 
/**
 * Add current and committed offsets to metric group.
-*
-* @param metricGroup The metric group to use
 */
-   protected void addOffsetStateGauge(MetricGroup metricGroup) {
-   // add current offsets to gage
-   MetricGroup currentOffsets = 
metricGroup.addGroup("current-offsets");
-   MetricGroup committedOffsets = 
metricGroup.addGroup("committed-offsets");
-   for (KafkaTopicPartitionState ktp : 
subscribedPartitionStates) {
-   currentOffsets.gauge(ktp.getTopic() + "-" + 
ktp.getPartition(), new OffsetGauge(ktp, OffsetGaugeType.CURRENT_OFFSET));
-   committedOffsets.gauge(ktp.getTopic() + "-" + 
ktp.getPartition(), new OffsetGauge(ktp, OffsetGaugeType.COMMITTED_OFFSET));
+   protected void 
registerOffsetMetrics(List<KafkaTopicPartitionState> 
partitionOffsetStates) {
--- End diff --

make private?
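For context, the gauge-registration pattern in the quoted diff — one gauge per partition, grouped under `current-offsets` and `committed-offsets` subgroups — can be sketched independently of Flink's actual `MetricGroup` API. All types below are simplified, illustrative stand-ins, not Flink classes:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.LongSupplier;

// Simplified stand-ins for a metric registry: the pattern is one gauge
// per partition, registered under a per-offset-type subgroup.
public class OffsetMetricsDemo {

	static class MetricGroup {
		final String name;
		final Map<String, LongSupplier> gauges = new HashMap<>();
		final Map<String, MetricGroup> children = new HashMap<>();

		MetricGroup(String name) { this.name = name; }

		MetricGroup addGroup(String child) {
			return children.computeIfAbsent(child, MetricGroup::new);
		}

		void gauge(String gaugeName, LongSupplier supplier) {
			gauges.put(gaugeName, supplier);
		}
	}

	static class PartitionState {
		final String topic;
		final int partition;
		long currentOffset;
		long committedOffset;

		PartitionState(String topic, int partition) {
			this.topic = topic;
			this.partition = partition;
		}
	}

	// Register current/committed offset gauges for each partition,
	// mirroring the structure in the quoted diff. The gauges read the
	// partition state lazily, so they always report the latest offsets.
	static void registerOffsetMetrics(MetricGroup consumerGroup, Iterable<PartitionState> partitions) {
		MetricGroup current = consumerGroup.addGroup("current-offsets");
		MetricGroup committed = consumerGroup.addGroup("committed-offsets");
		for (PartitionState p : partitions) {
			current.gauge(p.topic + "-" + p.partition, () -> p.currentOffset);
			committed.gauge(p.topic + "-" + p.partition, () -> p.committedOffset);
		}
	}
}
```

Making such a method private, as suggested, is safe in this shape because nothing outside the owning class needs to choose the target group.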


---


[GitHub] flink pull request #5269: [FLINK-6004] [kinesis] Allow FlinkKinesisConsumer ...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5269#discussion_r165020094
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/test/java/org/apache/flink/streaming/connectors/kinesis/internals/KinesisDataFetcherTest.java
 ---
@@ -86,6 +88,56 @@ public void testIfNoShardsAreFoundShouldThrowException() 
throws Exception {
fetcher.runFetcher(); // this should throw RuntimeException
}
 
+   @Test
+   public void testSkipCorruptedRecord() throws Exception {
+   final String stream = "fakeStream";
+   final int numShards = 3;
+
+   final LinkedList testShardStates = new 
LinkedList<>();
+   final TestSourceContext sourceContext = new 
TestSourceContext<>();
+
+   final TestableKinesisDataFetcher fetcher = new 
TestableKinesisDataFetcher<>(
+   Collections.singletonList(stream),
+   sourceContext,
+   TestUtils.getStandardProperties(),
+   new KinesisDeserializationSchemaWrapper<>(new 
SimpleStringSchema()),
+   1,
+   0,
+   new AtomicReference<>(),
+   testShardStates,
+   new HashMap<>(),
+   
FakeKinesisBehavioursFactory.nonReshardedStreamsBehaviour(Collections.singletonMap(stream,
 numShards)));
+
+   // FlinkKinesisConsumer is responsible for setting up the 
fetcher before it can be run;
+   // run the consumer until it reaches the point where the 
fetcher starts to run
+   final DummyFlinkKafkaConsumer consumer = new 
DummyFlinkKafkaConsumer<>(TestUtils.getStandardProperties(), fetcher, 1, 0);
+
+   CheckedThread consumerThread = new CheckedThread() {
+   @Override
+   public void go() throws Exception {
+   consumer.run(new TestSourceContext<>());
+   }
+   };
+   consumerThread.start();
+
+   fetcher.waitUntilRun();
+   consumer.cancel();
+   consumerThread.sync();
+
+   assertEquals(numShards, testShardStates.size());
+
+   for (int i = 0; i < numShards; i++) {
+   fetcher.emitRecordAndUpdateState("record-" + i, 10L, i, 
new SequenceNumber("seq-num-1"));
+   assertEquals(new SequenceNumber("seq-num-1"), 
testShardStates.get(i).getLastProcessedSequenceNum());
+   assertEquals(new StreamRecord<>("record-" + i, 
10L), sourceContext.removeLatestOutput());
--- End diff --

indentation


---


[GitHub] flink pull request #5269: [FLINK-6004] [kinesis] Allow FlinkKinesisConsumer ...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5269#discussion_r165019903
  
--- Diff: 
flink-connectors/flink-connector-kinesis/src/main/java/org/apache/flink/streaming/connectors/kinesis/internals/KinesisDataFetcher.java
 ---
@@ -484,7 +484,10 @@ protected Properties getConsumerConfiguration() {
 */
protected final void emitRecordAndUpdateState(T record, long 
recordTimestamp, int shardStateIndex, SequenceNumber lastSequenceNumber) {
synchronized (checkpointLock) {
-   sourceContext.collectWithTimestamp(record, 
recordTimestamp);
+   if (record != null) {
+   sourceContext.collectWithTimestamp(record, 
recordTimestamp);
--- End diff --

Are we silently skipping the record or do we log _somewhere_ that a record 
was invalid?
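One hedged way to answer this review question — log and count skipped records rather than dropping them silently — is sketched below in plain Java. Class, method, and logger names are illustrative; this is not the actual `KinesisDataFetcher` implementation:

```java
import java.util.logging.Logger;

// Illustrative sketch: emit a record, or log and count it when
// deserialization produced null (i.e. a corrupted record was skipped).
public class RecordEmitter<T> {

	private static final Logger LOG = Logger.getLogger(RecordEmitter.class.getName());

	private final Object checkpointLock = new Object();
	private long skippedRecords = 0;

	public void emitRecordAndUpdateState(T record, long recordTimestamp) {
		synchronized (checkpointLock) {
			if (record != null) {
				collect(record, recordTimestamp);
			} else {
				// Not silent: record the skip so operators can notice it.
				skippedRecords++;
				LOG.warning("Skipping corrupted record at timestamp " + recordTimestamp);
			}
			// The shard state (sequence number) would be advanced here
			// in either case, so the corrupted record is not re-read.
		}
	}

	protected void collect(T record, long recordTimestamp) {
		// placeholder for sourceContext.collectWithTimestamp(record, recordTimestamp)
	}

	public long getSkippedRecords() {
		return skippedRecords;
	}
}
```

A counter like `skippedRecords` could additionally be exposed as a metric, which answers the "do we log _somewhere_" concern without flooding the logs.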


---


[GitHub] flink issue #5161: [FLINK-7608][metric] Refactor latency statistics metric

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5161
  
I've rebased the branch and did one more change:
```
this.latencyStats = new 
LatencyStats(this.metrics.parent().parent().addGroup("latency"), historySize, 
container.getIndexInSubtaskGroup(), getOperatorID());
```
===>
```
TaskManagerJobMetricGroup jobMetricGroup = this.metrics.parent().parent();
this.latencyStats = new LatencyStats(jobMetricGroup.addGroup("latency"), 
historySize, container.getIndexInSubtaskGroup(), getOperatorID());
```


---


[GitHub] flink issue #5343: [FLINK-8496][metrics] Create missing "Network" group

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5343
  
merging.


---


[GitHub] flink issue #5161: [FLINK-7608][metric] Refactor latency statistics metric

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5161
  
yes the display will now work.


---


[GitHub] flink issue #5292: [FLINK-8422] [core] Checkstyle for org.apache.flink.api.j...

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5292
  
merging.


---


[GitHub] flink issue #5378: [FLINK-8489][ES] Prevent side-effects when modifying user...

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5378
  
merging.


---


[GitHub] flink issue #5161: [FLINK-7608][metric] Refactor latency statistics metric

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5161
  
I'm wary about using the operator name (or _any_ variant that inexplicably 
exists) as that significantly increases the size of watermarks.

I see the use-case, and ideally I would like to use both the operator name 
and the id, so that each metric is unique by default while static names across 
jobs are still supported. But I don't see a way to do that without either a) blowing up 
watermarks even more or b) including an index of all operators (id+name) in the 
TDD and making that accessible.


---


[GitHub] flink pull request #5161: [FLINK-7608][metric] Refactor latency statistics m...

2018-01-31 Thread zentol
Github user zentol commented on a diff in the pull request:

https://github.com/apache/flink/pull/5161#discussion_r165000781
  
--- Diff: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/AbstractStreamOperator.java
 ---
@@ -194,14 +190,20 @@ public void setup(StreamTask containingTask, 
StreamConfig config, Output

[GitHub] flink issue #5384: [FLINK-8475][config][docs] Integrate akka options

2018-01-31 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5384
  
merging.


---


[GitHub] flink pull request #5392: [FLINK-8475][config][docs] Integrate JM options

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5392

[FLINK-8475][config][docs] Integrate JM options

## What is the purpose of the change

This PR integrates the JobManager `ConfigOptions` into the configuration 
docs generator.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate jobmanager configuration table into `config.md` and separate 
job- and taskmanager sections

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_jm

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5392.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5392


commit 61bb0eff965cd066edebc5e4a167dd9bd3a07f77
Author: zentol <chesnay@...>
Date:   2018-01-22T16:32:38Z

[FLINK-8475][config][docs] Integrate JM options




---


[GitHub] flink pull request #5391: [FLINK-8475][config][docs] Integrate BlobServer op...

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5391

[FLINK-8475][config][docs] Integrate BlobServer options

## What is the purpose of the change

This PR adds the BlobServer `ConfigOptions` to the full configuration 
reference.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate BlobServer  configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_blob

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5391.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5391


commit 4646481d5c398ee27c2c1600eb23e10009074e20
Author: zentol <chesnay@...>
Date:   2018-01-23T13:44:00Z

[FLINK-8475][config][docs] Integrate BlobServer options




---


[GitHub] flink issue #5384: [FLINK-8475][config][docs] Integrate akka options

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5384
  
Note that the existing docs were wrong regardless; see AkkaUtils:
```
val startupTimeout = configuration.getString(
  AkkaOptions.STARTUP_TIMEOUT,
  (akkaAskTimeout * 10).toString)
```
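The snippet above shows why the documented default was wrong: `akka.startup-timeout` has no static default; its effective default is derived at read time as ten times the ask timeout. A minimal, self-contained sketch of that derived-default pattern follows — plain Java with illustrative key names, not Flink's actual `Configuration` API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of a "derived default": an option with no static default falls
// back to a value computed from another option when it is read.
public class DerivedDefaultDemo {

	private final Map<String, String> conf = new HashMap<>();

	public void set(String key, String value) {
		conf.put(key, value);
	}

	public String getAskTimeout() {
		// this option has a static default
		return conf.getOrDefault("akka.ask.timeout", "10 s");
	}

	public String getStartupTimeout() {
		// no static default: if unset, derive 10x the ask timeout,
		// mirroring the AkkaUtils snippet quoted above
		String explicit = conf.get("akka.startup-timeout");
		if (explicit != null) {
			return explicit;
		}
		long askSeconds = Long.parseLong(getAskTimeout().split(" ")[0]);
		return (askSeconds * 10) + " s";
	}
}
```

Because the fallback is computed at read time, a docs generator that only inspects the `ConfigOption` definition cannot state a default — which is exactly the situation described in the comment below.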


---


[GitHub] flink issue #5384: [FLINK-8475][config][docs] Integrate akka options

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5384
  
`akka.startup-timeout` has no default value because that is how the 
`ConfigOption` is actually defined; in other words, at the moment we can't guarantee 
anything about the default.


---


[GitHub] flink issue #5384: [FLINK-8475][config][docs] Integrate akka options

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5384
  
@aljoscha Fixed.


---


[GitHub] flink pull request #5390: [FLINK-8475][config][docs] Integrate SlotManager o...

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5390

[FLINK-8475][config][docs] Integrate SlotManager options

## What is the purpose of the change

This PR integrates the SlotManager `ConfigOptions` into the configuration 
docs generator.

## Brief change log

* Add `ConfigGroup` for slotmanager config options
* integrate slotmanager configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_sm

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5390.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5390


commit 8b4efb9f09cd2d2ba3d68827e4c3c990e9bc2ceb
Author: zentol <chesnay@...>
Date:   2018-01-22T15:40:20Z

[FLINK-8475][config][docs] Integrate SlotManager options




---


[GitHub] flink pull request #5389: [FLINK-8475][config][docs] Integrate REST options

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5389

[FLINK-8475][config][docs] Integrate REST options

## What is the purpose of the change

This PR adds the REST `ConfigOptions` to the full configuration reference.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_rest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5389.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5389


commit 32a40e1a7b549e09e5b5a8e62dc6d155c0a80916
Author: zentol <chesnay@...>
Date:   2018-01-23T13:52:22Z

[FLINK-8475][config][docs] Integrate REST options




---


[GitHub] flink pull request #5387: [FLINK-8475][config][docs] Integrate optimizer opt...

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5387

[FLINK-8475][config][docs] Integrate optimizer options

## What is the purpose of the change

This PR ports the batch compiler ConfigConstants to `ConfigOptions` and 
integrates them into the configuration docs generator.

## Brief change log

* port compiler config constants to config options
* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate compiler configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_optimizer

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5387.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5387


commit 68ec1895d27da93107f01a500cbe6e59a05915b1
Author: zentol <chesnay@...>
Date:   2018-01-23T12:20:08Z

[FLINK-8475][config][docs] Integrate optimizer options




---


[GitHub] flink pull request #5386: [FLINK-8475][config][docs] Integrate netty options

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5386

[FLINK-8475][config][docs] Integrate netty options

## What is the purpose of the change

This PR integrates the Netty `ConfigOptions` into the configuration docs 
generator.

## Brief change log

* extend generator configuration to pick up `NettyConfig` class in 
flink-runtime
* update generator file matching to accept `NettyConfig`
* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate netty configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_netty

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5386.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5386


commit a98197824f47dcaf1e3f3795762ce4167657ebb0
Author: zentol <chesnay@...>
Date:   2018-01-22T16:02:12Z

[FLINK-8475][config][docs] Integrate netty options




---


[GitHub] flink pull request #5385: [FLINK-8475][config][docs] Integrate SSL options

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5385

[FLINK-8475][config][docs] Integrate SSL options

## What is the purpose of the change

This PR integrates the SSL `ConfigOptions` into the configuration docs 
generator.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate ssl configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_ssl

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5385.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5385


commit 6465e30d9e3905cd4dac114f2f6df869ffb66f63
Author: zentol <chesnay@...>
Date:   2018-01-22T15:35:19Z

[FLINK-8475][config][docs] Integrate SSL options




---


[GitHub] flink pull request #5384: [FLINK-8475][config][docs] Integrate akka options

2018-01-30 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5384

[FLINK-8475][config][docs] Integrate akka options

## What is the purpose of the change

This PR integrates the Akka `ConfigOptions` into the configuration docs 
generator.

## Brief change log

* Add missing descriptions to config options (derived from existing 
description/javadocs)
* integrate akka configuration table into `config.md`

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8475_akka

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5384.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5384


commit f61db3f46ca83913be3eaa9b2d04c4b25eb9e2cc
Author: zentol <chesnay@...>
Date:   2018-01-22T15:29:55Z

[FLINK-8475][config][docs] Integrate akka options




---


[GitHub] flink issue #5382: [FLINK-8524][JavaDoc] Fix JavaDoc for TypeExtractor.getBi...

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5382
  
merging.


---


[GitHub] flink issue #5379: [FLINK-8130][docs] Fix snapshot javadoc link

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5379
  
merging.


---


[GitHub] flink pull request #5340: [FLINK-8475][config][docs] Integrate more ConfigOp...

2018-01-30 Thread zentol
Github user zentol closed the pull request at:

https://github.com/apache/flink/pull/5340


---


[GitHub] flink issue #5340: [FLINK-8475][config][docs] Integrate more ConfigOptions i...

2018-01-30 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5340
  
Will split this PR into smaller chunks to make it easier to review.


---


[GitHub] flink issue #5357: [hotfix][JobGraph] Eliminate the conditions of parallelis...

2018-01-29 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5357
  
I would be in favor of closing this PR. This change doesn't _really_ 
improve anything, but removes a simple (and intuitive) sanity check.


---


[GitHub] flink issue #5378: [FLINK-8489][ES] Prevent side-effects when modifying user...

2018-01-29 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5378
  
Added a test for the `ElasticSearchSinkBase` that passes an unmodifiable 
map containing all properties that we currently try to remove.

I also modified the `ElasticSearchTestBase` to pass an unmodifiable map to 
the constructors.


---


[GitHub] flink issue #5377: [FLINK-8494][config] Migrate CC#DEFAULT_PARALLELISM_KEY

2018-01-29 Thread zentol
Github user zentol commented on the issue:

https://github.com/apache/flink/pull/5377
  
merging.


---


[GitHub] flink pull request #5379: [FLINK-8130][docs] Fix snapshot javadoc link

2018-01-29 Thread zentol
GitHub user zentol opened a pull request:

https://github.com/apache/flink/pull/5379

[FLINK-8130][docs] Fix snapshot javadoc link

## What is the purpose of the change

This PR fixes the snapshot javadoc link that is currently broken. Currently 
the website tries to access 
`https://ci.apache.org/projects/flink/flink-docs-release-1.5-SNAPSHOT/api/java/`
 but the working link is 
`https://ci.apache.org/projects/flink/flink-docs-release-1.5/api/java/`.

This was broken by accident when switching master to 1.5.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zentol/flink 8130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5379.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5379


commit 3d66cdc80fd74723d82b4bfc55065eef8a76684a
Author: zentol <chesnay@...>
Date:   2018-01-29T12:29:23Z

[FLINK-8130][docs] Fix snapshot javadoc link




---

