[GitHub] [flink-statefun] igalshilman edited a comment on issue #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
igalshilman edited a comment on issue #30: [FLINK-16226] Add Backpressure to 
HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#issuecomment-589930328
 
 
   Now that I hope it is clearer, I’m open to suggestions


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink-statefun] igalshilman edited a comment on issue #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
igalshilman edited a comment on issue #30: [FLINK-16226] Add Backpressure to 
HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#issuecomment-589930100
 
 
   I think I understand where the confusion comes from.
   requstState (see the field comment)
   technically encodes two states (to reduce the StateHandle count):
   1. A flag indicating whether there is something in flight
   (NULL means nothing, 0 means there is a batch in flight).
   2. The size of the currently accumulating batch (regardless of what is on
   the wire).
   
   The semantics of the property as defined in module.yaml only limit the
   batch size that a remote function can receive.
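   
   To make the encoding concrete, here is a minimal sketch of a single
   persisted value carrying both pieces of state. The class and method names
   are hypothetical, not the actual HttpFunction code:
   
   ```java
   import java.util.concurrent.atomic.AtomicReference;
   
   // Sketch: one value handle encoding the two states described above.
   // null   -> nothing is in flight.
   // n >= 0 -> a batch is in flight; n is the size of the batch that is
   //           currently accumulating (regardless of what is on the wire).
   final class RequestStateSketch {
     // Stand-in for the real persisted-value handle.
     private final AtomicReference<Integer> requestState = new AtomicReference<>(null);
   
     boolean somethingInFlight() {
       return requestState.get() != null;
     }
   
     void batchSent() {
       requestState.set(0); // in flight, accumulator empty
     }
   
     // Call only while a batch is in flight: grows the accumulating batch.
     void recordEnqueuedWhileInFlight() {
       requestState.updateAndGet(n -> n + 1);
     }
   
     void responseReceived() {
       requestState.set(null); // nothing in flight anymore
     }
   }
   ```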
 
   


[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382894111
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -136,140 +132,143 @@ val result: DataStream[Alert] = patternStream.process(
 
 
 
-## The Pattern API
+## 模式API
 
-The pattern API allows you to define complex pattern sequences that you want 
to extract from your input stream.
+模式API可以让你定义想从输入流中抽取的复杂模式序列。
 
-Each complex pattern sequence consists of multiple simple patterns, i.e. 
patterns looking for individual events with the same properties. From now on, 
we will call these simple patterns **patterns**, and the final complex pattern 
sequence we are searching for in the stream, the **pattern sequence**. You can 
see a pattern sequence as a graph of such patterns, where transitions from one 
pattern to the next occur based on user-specified
-*conditions*, e.g. `event.getName().equals("end")`. A **match** is a sequence 
of input events which visits all
-patterns of the complex pattern graph, through a sequence of valid pattern 
transitions.
+每个复杂的模式序列包括多个简单的模式,比如,寻找拥有相同属性事件序列的模式。从现在开始,我们把这些简单的模式称作**模式**,
+把我们在数据流中最终寻找的复杂模式序列称作**模式序列**,你可以把模式序列看作是这样的模式构成的图,
+这些模式基于用户指定的**条件**从一个转换到另外一个,比如 `event.getName().equals("end")`。
+一个**匹配**是输入事件的一个序列,这些事件通过一系列有效的模式转换,能够访问到复杂模式图中的所有模式。
 
-{% warn Attention %} Each pattern must have a unique name, which you use later 
to identify the matched events.
+{% warn Attention %} 每个模式必须有一个独一无二的名字,你可以在后面使用它来识别匹配到的事件。
 
-{% warn Attention %} Pattern names **CANNOT** contain the character `":"`.
+{% warn Attention %} 模式的名字不能包含字符`":"`.
 
-In the rest of this section we will first describe how to define [Individual 
Patterns](#individual-patterns), and then how you can combine individual 
patterns into [Complex Patterns](#combining-patterns).
+这一节的剩余部分我们会先讲述如何定义[单个模式](#单个模式),然后讲如何将单个模式组合成[复杂模式](#组合模式)。
 
-### Individual Patterns
+### 单个模式
 
-A **Pattern** can be either a *singleton* or a *looping* pattern. Singleton 
patterns accept a single
-event, while looping patterns can accept more than one. In pattern matching 
symbols, the pattern `"a b+ c? d"` (or `"a"`, followed by *one or more* 
`"b"`'s, optionally followed by a `"c"`, followed by a `"d"`), `a`, `c?`, and 
`d` are
-singleton patterns, while `b+` is a looping one. By default, a pattern is a 
singleton pattern and you can transform
-it to a looping one by using [Quantifiers](#quantifiers). Each pattern can 
have one or more
-[Conditions](#conditions) based on which it accepts events.
+一个**模式**可以是一个**单例**或者**循环**模式。单例模式只接受一个事件,循环模式可以接受多个事件。
+在模式匹配表达式中,模式`"a b+ c? 
d"`(或者`"a"`,后面跟着一个或者多个`"b"`,再往后可选择的跟着一个`"c"`,最后跟着一个`"d"`),
+`a`,`c?`,和 `d`都是单例模式,`b+`是一个循环模式。默认情况下,模式都是单例的,你可以通过使用[量词](#量词)把它们转换成循环模式。
+每个模式可以有一个或者多个[条件](#条件)来决定它接受哪些事件。
 
- Quantifiers
+ 量词
 
-In FlinkCEP, you can specify looping patterns using these methods: 
`pattern.oneOrMore()`, for patterns that expect one or more occurrences of a 
given event (e.g. the `b+` mentioned before); and `pattern.times(#ofTimes)`, 
for patterns that
-expect a specific number of occurrences of a given type of event, e.g. 4 
`a`'s; and `pattern.times(#fromTimes, #toTimes)`, for patterns that expect a 
specific minimum number of occurrences and a maximum number of occurrences of a 
given type of event, e.g. 2-4 `a`s.
+在FlinkCEP中,你可以通过这些方法指定循环模式:`pattern.oneOrMore()`,指定期望一个给定事件出现一次或者多次的模式(例如前面提到的`b+`模式);
+`pattern.times(#ofTimes)`,指定期望一个给定事件出现特定次数的模式,例如出现4次`a`;
+`pattern.times(#fromTimes, 
#toTimes)`,指定期望一个给定事件出现次数在一个最小值和最大值中间的模式,比如出现2-4次`a`。
 
-You can make looping patterns greedy using the `pattern.greedy()` method, but 
you cannot yet make group patterns greedy. You can make all patterns, looping 
or not, optional using the `pattern.optional()` method.
+你可以使用`pattern.greedy()`方法让循环模式变成贪心的,但现在还不能让模式组贪心。
+你可以使用`pattern.optional()`方法让所有的模式变成可选的,不管是否是循环模式。
 
-For a pattern named `start`, the following are valid quantifiers:
+对一个命名为`start`的模式,以下量词是有效的:
 
  
  
  {% highlight java %}
- // expecting 4 occurrences
+ // 期望出现4次
  start.times(4);
 
- // expecting 0 or 4 occurrences
+ // 期望出现0或者4次
  start.times(4).optional();
 
- // expecting 2, 3 or 4 occurrences
+ // 期望出现2、3或者4次
  start.times(2, 4);
 
- // expecting 2, 3 or 4 occurrences and repeating as many as possible
+ // 期望出现2、3或者4次,并且尽可能的重复次数多
  start.times(2, 4).greedy();
 
- // expecting 0, 2, 3 or 4 occurrences
+ // 期望出现0、2、3或者4次
  start.times(2, 4).optional();
 
- // expecting 0, 2, 3 or 4 occurrences and repeating as many as possible
+ // 期望出现0、2、3或者4次,并且尽可能的重复次数多
  start.times(2, 4).optional().greedy();
 
- // expecting 1 or more occurrences
+ // 期望出现1到多次
  start.oneOrMore();
 
- // expecting 1 or more occurrences and repeating as many as possible
+ // 期望出现1到多次,并且尽可能的重复次数多
  start.oneOrMore().greedy();
 
- // expecting 0 or more occurrences
+ // 期望出现0到多次
  start.oneOrMore().optional();
 
- // expecting 0 

[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382893781
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -63,13 +60,12 @@ add the FlinkCEP dependency to the `pom.xml` of your 
project.
 
 
 
-{% info %} FlinkCEP is not part of the binary distribution. See how to link 
with it for cluster execution 
[here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
+{% info %} 
FlinkCEP不是二进制分发的一部分。在集群上执行如何链接它可以看[这里]({{site.baseurl}}/dev/projectsetup/dependencies.html)。
 
 Review comment:
   how about "二进制发行版” ?




[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382894320
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -665,12 +647,11 @@ pattern.oneOrMore().greedy()
 
 
 
-### Combining Patterns
+### 组合模式
 
-Now that you've seen what an individual pattern can look like, it is time to 
see how to combine them
-into a full pattern sequence.
+现在你已经看到单个的模式是什么样的了,改取看看如何把它们连接起来组成一个完整的模式序列。
 
 Review comment:
   改取 -> 该去




[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382893837
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -23,23 +23,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-FlinkCEP is the Complex Event Processing (CEP) library implemented on top of 
Flink.
-It allows you to detect event patterns in an endless stream of events, giving 
you the opportunity to get hold of what's important in your
-data.
+FlinkCEP是在Flink上层实现的复杂事件处理库。
+它可以让你在无限事件流中检测出特定的事件模型,有机会掌握数据中重要的那部分。
 
-This page describes the API calls available in Flink CEP. We start by 
presenting the [Pattern API](#the-pattern-api),
-which allows you to specify the patterns that you want to detect in your 
stream, before presenting how you can
-[detect and act upon matching event sequences](#detecting-patterns). We then 
present the assumptions the CEP
-library makes when [dealing with lateness](#handling-lateness-in-event-time) 
in event time and how you can
-[migrate your job](#migrating-from-an-older-flink-versionpre-13) from an older 
Flink version to Flink-1.3.
+本页讲述了Flink 
CEP中可用的API,我们首先讲述[模式API](#模式api),它可以让你指定想在数据流中检测的模式,然后讲述如何[检测匹配的事件序列并进行处理](#检测模式)。
+再然后我们讲述Flink在按照事件时间[处理迟到事件](#按照事件时间处理晚到事件)时的假设,
+以及如何从旧版本的Flink向1.3之后的版本[迁移作业](#从旧版本迁移13之前)。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Getting Started
+## 开始
 
-If you want to jump right in, [set up a Flink program]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html) and
-add the FlinkCEP dependency to the `pom.xml` of your project.
+如果你想现在开始尝试,[创建一个Flink程序]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html),
 
 Review comment:
   +n to all other links




[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382893889
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -63,13 +60,12 @@ add the FlinkCEP dependency to the `pom.xml` of your 
project.
 
 
 
-{% info %} FlinkCEP is not part of the binary distribution. See how to link 
with it for cluster execution 
[here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
+{% info %} 
FlinkCEP不是二进制分发的一部分。在集群上执行如何链接它可以看[这里]({{site.baseurl}}/dev/projectsetup/dependencies.html)。
 
-Now you can start writing your first CEP program using the Pattern API.
+现在可以开始使用Pattern API写你的第一个CEP程序了。
 
-{% warn Attention %} The events in the `DataStream` to which
-you want to apply pattern matching must implement proper `equals()` and 
`hashCode()` methods
-because FlinkCEP uses them for comparing and matching events.
+{% warn Attention %} `DataStream`中的事件,如果你想在上面进行模式匹配的话,必须实现合适的 
`equals()`和`hashCode()`方法,
 
 Review comment:
   Attention -> 注意




[GitHub] [flink] libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] Translate Event Processing (CEP) page into Chinese

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11168: [FLINK-16140] [docs-zh] 
Translate Event Processing (CEP) page into Chinese
URL: https://github.com/apache/flink/pull/11168#discussion_r382893750
 
 

 ##
 File path: docs/dev/libs/cep.zh.md
 ##
 @@ -23,23 +23,20 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-FlinkCEP is the Complex Event Processing (CEP) library implemented on top of 
Flink.
-It allows you to detect event patterns in an endless stream of events, giving 
you the opportunity to get hold of what's important in your
-data.
+FlinkCEP是在Flink上层实现的复杂事件处理库。
+它可以让你在无限事件流中检测出特定的事件模型,有机会掌握数据中重要的那部分。
 
-This page describes the API calls available in Flink CEP. We start by 
presenting the [Pattern API](#the-pattern-api),
-which allows you to specify the patterns that you want to detect in your 
stream, before presenting how you can
-[detect and act upon matching event sequences](#detecting-patterns). We then 
present the assumptions the CEP
-library makes when [dealing with lateness](#handling-lateness-in-event-time) 
in event time and how you can
-[migrate your job](#migrating-from-an-older-flink-versionpre-13) from an older 
Flink version to Flink-1.3.
+本页讲述了Flink 
CEP中可用的API,我们首先讲述[模式API](#模式api),它可以让你指定想在数据流中检测的模式,然后讲述如何[检测匹配的事件序列并进行处理](#检测模式)。
+再然后我们讲述Flink在按照事件时间[处理迟到事件](#按照事件时间处理晚到事件)时的假设,
+以及如何从旧版本的Flink向1.3之后的版本[迁移作业](#从旧版本迁移13之前)。
 
 * This will be replaced by the TOC
 {:toc}
 
-## Getting Started
+## 开始
 
-If you want to jump right in, [set up a Flink program]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html) and
-add the FlinkCEP dependency to the `pom.xml` of your project.
+如果你想现在开始尝试,[创建一个Flink程序]({{ site.baseurl 
}}/dev/projectsetup/dependencies.html),
 
 Review comment:
   Add `/zh` to the link.




[GitHub] [flink] flinkbot edited a comment on issue #11127: [FLINK-16081][docs] Translate /dev/table/index.zh.md

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11127: [FLINK-16081][docs] Translate 
/dev/table/index.zh.md
URL: https://github.com/apache/flink/pull/11127#issuecomment-587544521
 
 
   
   ## CI report:
   
   * 22ee2bb7027f9253a5479e5df4c83e8a4c9809a8 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149478620) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5287)
 
   * c9aaa06a3c6ba5e7b259668f17c0ed4c9d3e999c Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150129021) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5453)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11181: [FLINK-16230][tests]Use LinkedHashSet instead of HashSet

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11181: [FLINK-16230][tests]Use 
LinkedHashSet instead of HashSet
URL: https://github.com/apache/flink/pull/11181#issuecomment-589919323
 
 
   
   ## CI report:
   
   * 1adaf381144df070b76f311652607fe295842863 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150128180) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


[GitHub] [flink] flinkbot edited a comment on issue #11180: [FLINK-16220][json] Fix JsonRowSerializationSchema cast exception due…

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11180: [FLINK-16220][json] Fix 
JsonRowSerializationSchema cast exception due…
URL: https://github.com/apache/flink/pull/11180#issuecomment-589914664
 
 
   
   ## CI report:
   
   * 56c928ea39dc0c1b9de3e2669d8e48994b208010 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150126022) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11128: [FLINK-16082][docs] Translate /dev/table/streaming/index.zh.md

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11128: [FLINK-16082][docs] Translate 
/dev/table/streaming/index.zh.md
URL: https://github.com/apache/flink/pull/11128#issuecomment-587563150
 
 
   
   ## CI report:
   
   * c17ca315c74804ebe0f81dcfac7dfead7e83b590 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149486099) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5289)
 
   * a3208ce24928eae56847b3b11c7b5ea3efc7c7a3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #28: [FLINK-16159] [tests, build] Add verification integration test + integrate with Maven build

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #28: [FLINK-16159] [tests, 
build] Add verification integration test + integrate with Maven build
URL: https://github.com/apache/flink-statefun/pull/28#discussion_r382886088
 
 

 ##
 File path: 
statefun-integration-tests/statefun-sanity-itcase/src/test/java/org/apache/flink/statefun/itcases/sanity/SanityVerificationITCase.java
 ##
 @@ -0,0 +1,287 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.statefun.itcases.sanity;
+
+import static org.hamcrest.CoreMatchers.hasItems;
+import static org.hamcrest.MatcherAssert.assertThat;
+
+import java.nio.file.Paths;
+import java.time.Duration;
+import java.util.ArrayList;
+import java.util.Collections;
+import java.util.List;
+import java.util.Map;
+import java.util.Properties;
+import java.util.concurrent.Callable;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
+import java.util.concurrent.Future;
+import java.util.concurrent.TimeUnit;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.Command;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.FnAddress;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.Modify;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.Noop;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.Send;
+import 
org.apache.flink.statefun.itcases.sanity.generated.VerificationMessages.StateSnapshot;
+import org.apache.kafka.clients.consumer.Consumer;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+import org.apache.kafka.common.serialization.Deserializer;
+import org.apache.kafka.common.serialization.Serializer;
+import org.junit.Rule;
+import org.junit.Test;
+import org.testcontainers.containers.GenericContainer;
+import org.testcontainers.containers.KafkaContainer;
+import org.testcontainers.containers.Network;
+import org.testcontainers.images.builder.ImageFromDockerfile;
+
+/**
+ * Sanity verification integration test based on the {@link 
SanityVerificationModule} application.
+ *
+ * The integration test sets up Kafka brokers and the verification 
application using Docker, sends
+ * a few commands to Kafka to be consumed by the application, and finally 
verifies that outputs sent
+ * to Kafka from the application are correct.
+ */
+public class SanityVerificationITCase {
+
+  private static final String CONFLUENT_PLATFORM_VERSION = "5.0.3";
+
+  private static final ImageFromDockerfile verificationAppImage =
+  new ImageFromDockerfile("statefun-sanity-itcase")
+  .withFileFromClasspath("Dockerfile", "Dockerfile")
+  .withFileFromPath(".", Paths.get(System.getProperty("user.dir") + 
"/target/"));
+
+  @Rule public Network network = Network.newNetwork();
+
+  @Rule
+  public KafkaContainer kafka =
+  new KafkaContainer(CONFLUENT_PLATFORM_VERSION)
+  .withNetwork(network)
+  .withNetworkAliases("kafka-broker");
+
+  @Rule
+  public GenericContainer verificationAppMaster =
+  new GenericContainer(verificationAppImage)
+  .dependsOn(kafka)
+  .withNetwork(network)
+  .withNetworkAliases("master")
+  .withEnv("ROLE", "master")
+  .withEnv("MASTER_HOST", "master");
+
+  @Rule
+  public GenericContainer verificationAppWorker =
+  new GenericContainer(verificationAppImage)
+  .dependsOn(kafka, verificationAppMaster)
+  .withNetwork(network)
+  .withNetworkAliases("worker")
+  .withEnv("ROLE", "worker")
+  .withEnv("MASTER_HOST", "master");
+
+  @Test
+  public void run() throws Exception {
+final String kafkaAddress = kafka.getBootstrapServers();
+final ExecutorService kafkaIoExecutor = Executors.newCachedThreadPool();
+
+kafkaIoExecutor.submit(new ProduceCommands(kafkaAddress));
+

[GitHub] [flink] buptljy commented on a change in pull request #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
buptljy commented on a change in pull request #11179: 
[FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and 
remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#discussion_r382885940
 
 

 ##
 File path: 
flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/savepoint/SavepointSerializer.java
 ##
 @@ -52,6 +37,5 @@
 * @return The deserialized savepoint
 * @throws IOException Serialization failures are forwarded
 */
-   T deserialize(DataInputStream dis, ClassLoader userCodeClassLoader) 
throws IOException;
-
+   SavepointV2 deserialize(DataInputStream dis, ClassLoader 
userCodeClassLoader) throws IOException;
 
 Review comment:
   Should we bind #SavepointSerializer to #SavepointV2?
   If we do, we would have to create a new interface if we want to add 
#SavepointV3 in the future. Is this expected?
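   
   To make the trade-off concrete, here is a hypothetical sketch of the two
   shapes of the interface (simplified stand-ins, not the actual Flink types):
   
   ```java
   import java.io.DataInputStream;
   import java.io.IOException;
   
   // Simplified stand-in for the savepoint class.
   class SavepointV2 {}
   
   // Shape A: generic over the savepoint type; a future SavepointV3 only
   // needs a new implementation of the same interface.
   interface GenericSavepointSerializer<T> {
     T deserialize(DataInputStream dis, ClassLoader userCodeClassLoader) throws IOException;
   }
   
   // Shape B (this change): bound to SavepointV2; adding a SavepointV3
   // would require a new interface -- the concern raised above.
   interface BoundSavepointSerializer {
     SavepointV2 deserialize(DataInputStream dis, ClassLoader userCodeClassLoader) throws IOException;
   }
   ```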




[GitHub] [flink] wuchong merged pull request #11128: [FLINK-16082][docs] Translate /dev/table/streaming/index.zh.md

2020-02-21 Thread GitBox
wuchong merged pull request #11128: [FLINK-16082][docs] Translate 
/dev/table/streaming/index.zh.md
URL: https://github.com/apache/flink/pull/11128
 
 
   




[jira] [Resolved] (FLINK-16082) Translate "Overview" page of "Streaming Concepts" into Chinese

2020-02-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-16082.
-
Fix Version/s: 1.11.0
   Resolution: Fixed

Resolved in master (1.11.0): aa048ae3b648e802f37c5698ae30a266656b7da2

> Translate "Overview" page of "Streaming Concepts" into Chinese
> --
>
> Key: FLINK-16082
> URL: https://issues.apache.org/jira/browse/FLINK-16082
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/streaming/
> The markdown file is located in {{flink/docs/dev/table/streaming/index.zh.md}}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] libenchao commented on issue #11127: [FLINK-16081][docs] Translate /dev/table/index.zh.md

2020-02-21 Thread GitBox
libenchao commented on issue #11127: [FLINK-16081][docs] Translate 
/dev/table/index.zh.md
URL: https://github.com/apache/flink/pull/11127#issuecomment-589917841
 
 
   @JingsongLi Thanks for your review, I've addressed your comments.




[jira] [Updated] (FLINK-16105) Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16105:
---
Labels: pull-request-available  (was: )

> Translate "User-defined Sources & Sinks" page of "Table API & SQL" into 
> Chinese 
> 
>
> Key: FLINK-16105
> URL: https://issues.apache.org/jira/browse/FLINK-16105
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/sourceSinks.html
> The markdown file is located in {{flink/docs/dev/table/sourceSinks.zh.md}}





[GitHub] [flink] wuchong closed pull request #11114: FLINK-16105 Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-21 Thread GitBox
wuchong closed pull request #11114: FLINK-16105 Translate "User-defined Sources 
& Sinks" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11114
 
 
   




[GitHub] [flink] wuchong commented on issue #11114: FLINK-16105 Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-21 Thread GitBox
wuchong commented on issue #11114: FLINK-16105 Translate "User-defined Sources 
& Sinks" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11114#issuecomment-589917546
 
 
   Thanks @zgq25302111 , I will close this PR for now. Please open another PR when 
the translation is ready :) Thank you.




[jira] [Assigned] (FLINK-16220) JsonRowSerializationSchema throws cast exception : NullNode cannot be cast to ArrayNode

2020-02-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16220:
---

Assignee: Benchao Li

> JsonRowSerializationSchema throws cast exception : NullNode cannot be cast to 
> ArrayNode
> ---
>
> Key: FLINK-16220
> URL: https://issues.apache.org/jira/browse/FLINK-16220
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Benchao Li
>Assignee: Benchao Li
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> It's caused by object reuse. For the schema below:
> {code:java}
> create table sink {
>   col1 int,
>   col2 array
> }{code}
> If col2 is null, the reused object will be {{NullNode}}. For the next 
> record, if it's not null, we cast the reused object {{NullNode}} to 
> {{ArrayNode}}, which throws a cast exception.
>  
> cc [~jark] [~twalthr] 
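
A minimal, self-contained sketch of this failure mode (hypothetical class and
method names, plain Jackson; not the actual JsonRowSerializationSchema code):

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;
import com.fasterxml.jackson.databind.node.NullNode;

public class ReusedNodeSketch {
  // Cached between records for object reuse, as in the schema.
  private JsonNode reuse;

  JsonNode convertArrayField(Object[] value) {
    if (value == null) {
      reuse = NullNode.getInstance(); // record 1: null column caches a NullNode
      return reuse;
    }
    // record 2: casting the cached NullNode to ArrayNode fails
    ArrayNode array =
        reuse == null ? JsonNodeFactory.instance.arrayNode() : (ArrayNode) reuse;
    array.removeAll();
    for (Object element : value) {
      array.add(String.valueOf(element));
    }
    reuse = array;
    return array;
  }

  public static void main(String[] args) {
    ReusedNodeSketch sketch = new ReusedNodeSketch();
    sketch.convertArrayField(null);               // col2 is null
    sketch.convertArrayField(new Object[] {"a"}); // ClassCastException: NullNode -> ArrayNode
  }
}
{code}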





[jira] [Updated] (FLINK-16230) Use LinkedHashSet instead of HashSet for a deterministic order when testing serialization

2020-02-21 Thread cpugputpu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

cpugputpu updated FLINK-16230:
--
Summary: Use LinkedHashSet instead of HashSet for a deterministic order 
when testing serialization  (was: Use LinkedHashMap instead of HashMap for a 
deterministic order when testing serialization)

> Use LinkedHashSet instead of HashSet for a deterministic order when testing 
> serialization
> -
>
> Key: FLINK-16230
> URL: https://issues.apache.org/jira/browse/FLINK-16230
> Project: Flink
>  Issue Type: Bug
> Environment: TEST: 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest#testJavaSet
> StackTrace:
> java.util.HashMap$HashIterator$HashIteratorShuffler.<init>
> java.util.HashMap$HashIterator.<init>(HashMap.java:1435)
> java.util.HashMap$KeyIterator.<init>(HashMap.java:1467)
> java.util.HashMap$KeySet.iterator(HashMap.java:917)
> java.util.HashSet.iterator(HashSet.java:173)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEqualsIterable(DeeplyEqualsChecker.java:107)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals0(DeeplyEqualsChecker.java:94)
> org.apache.flink.testutils.DeeplyEqualsChecker.lambda$deepEquals$0(DeeplyEqualsChecker.java:79)
> java.util.Optional.orElseGet(Optional.java:267)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals(DeeplyEqualsChecker.java:79)
> org.apache.flink.testutils.CustomEqualityMatcher.matches(CustomEqualityMatcher.java:63)
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12)
> org.junit.Assert.assertThat(Assert.java:956)
> org.apache.flink.api.common.typeutils.SerializerTestBase.deepEquals(SerializerTestBase.java:493)
> org.apache.flink.api.common.typeutils.SerializerTestBase.testSerializedCopyIndividually(SerializerTestBase.java:379)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:498)
> org.apache.flink.api.common.typeutils.SerializerTestInstance.testAll(SerializerTestInstance.java:92)
> org.apache.flink.api.java.typeutils.runtime.AbstractGenericTypeSerializerTest.runTests(AbstractGenericTypeSerializerTest.java:155)
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest.testJavaSet(KryoGenericTypeSerializerTest.java:59)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:498)
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> org.junit.runners.Suite.runChild(Suite.java:128)
> org.junit.runners.Suite.runChild(Suite.java:27)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> 

[GitHub] [flink] flinkbot commented on issue #11181: [FLINK-16230][tests]Use LinkedHashSet instead of HashSet

2020-02-21 Thread GitBox
flinkbot commented on issue #11181: [FLINK-16230][tests]Use LinkedHashSet 
instead of HashSet
URL: https://github.com/apache/flink/pull/11181#issuecomment-589917192
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 1adaf381144df070b76f311652607fe295842863 (Sat Feb 22 
04:29:35 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16230).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-16230) Use LinkedHashMap instead of HashMap for a deterministic order when testing serialization

2020-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16230:
---
Labels: pull-request-available  (was: )

> Use LinkedHashMap instead of HashMap for a deterministic order when testing 
> serialization
> -
>
> Key: FLINK-16230
> URL: https://issues.apache.org/jira/browse/FLINK-16230
> Project: Flink
>  Issue Type: Bug
> Environment: TEST: 
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest#testJavaSet
> StackTrace:
> java.util.HashMap$HashIterator$HashIteratorShuffler.<init>
> java.util.HashMap$HashIterator.<init>(HashMap.java:1435)
> java.util.HashMap$KeyIterator.<init>(HashMap.java:1467)
> java.util.HashMap$KeySet.iterator(HashMap.java:917)
> java.util.HashSet.iterator(HashSet.java:173)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEqualsIterable(DeeplyEqualsChecker.java:107)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals0(DeeplyEqualsChecker.java:94)
> org.apache.flink.testutils.DeeplyEqualsChecker.lambda$deepEquals$0(DeeplyEqualsChecker.java:79)
> java.util.Optional.orElseGet(Optional.java:267)
> org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals(DeeplyEqualsChecker.java:79)
> org.apache.flink.testutils.CustomEqualityMatcher.matches(CustomEqualityMatcher.java:63)
> org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12)
> org.junit.Assert.assertThat(Assert.java:956)
> org.apache.flink.api.common.typeutils.SerializerTestBase.deepEquals(SerializerTestBase.java:493)
> org.apache.flink.api.common.typeutils.SerializerTestBase.testSerializedCopyIndividually(SerializerTestBase.java:379)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:498)
> org.apache.flink.api.common.typeutils.SerializerTestInstance.testAll(SerializerTestInstance.java:92)
> org.apache.flink.api.java.typeutils.runtime.AbstractGenericTypeSerializerTest.runTests(AbstractGenericTypeSerializerTest.java:155)
> org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest.testJavaSet(KryoGenericTypeSerializerTest.java:59)
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> java.lang.reflect.Method.invoke(Method.java:498)
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> org.junit.runners.Suite.runChild(Suite.java:128)
> org.junit.runners.Suite.runChild(Suite.java:27)
> org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
> org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
> org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
> org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
> org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
> org.junit.runners.ParentRunner.run(ParentRunner.java:363)
> org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
> org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
> org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
> 

[GitHub] [flink] cpugputpu opened a new pull request #11181: [FLINK-16230][tests]Use LinkedHashSet instead of HashSet

2020-02-21 Thread GitBox
cpugputpu opened a new pull request #11181: [FLINK-16230][tests]Use 
LinkedHashSet instead of HashSet
URL: https://github.com/apache/flink/pull/11181
 
 
   For background information, please refer to 
https://issues.apache.org/jira/browse/FLINK-16230
   
   ## What is the purpose of the change
   The fix for the test is to use LinkedHashSet instead of HashSet so that the 
iteration order will be deterministic across different JVM versions or vendors, 
thus making the test more stable.
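   
   As a standalone illustration of the ordering difference (not code from this 
PR):
   
   ```java
   import java.util.HashSet;
   import java.util.LinkedHashSet;
   import java.util.Set;
   
   public class IterationOrderSketch {
     public static void main(String[] args) {
       // HashSet iteration order depends on hashing internals and may differ
       // across JVM versions or vendors.
       Set<String> hashSet = new HashSet<>();
       // LinkedHashSet always iterates in insertion order.
       Set<String> linkedSet = new LinkedHashSet<>();
       for (String s : new String[] {"b", "a", "c"}) {
         hashSet.add(s);
         linkedSet.add(s);
       }
       System.out.println(hashSet);   // order unspecified, e.g. [a, b, c]
       System.out.println(linkedSet); // always [b, a, c]
     }
   }
   ```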
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   




[jira] [Created] (FLINK-16230) Use LinkedHashMap instead of HashMap for a deterministic order when testing serialization

2020-02-21 Thread cpugputpu (Jira)
cpugputpu created FLINK-16230:
-

 Summary: Use LinkedHashMap instead of HashMap for a deterministic 
order when testing serialization
 Key: FLINK-16230
 URL: https://issues.apache.org/jira/browse/FLINK-16230
 Project: Flink
  Issue Type: Bug
 Environment: TEST: 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest#testJavaSet

StackTrace:

java.util.HashMap$HashIterator$HashIteratorShuffler.<init>
java.util.HashMap$HashIterator.<init>(HashMap.java:1435)
java.util.HashMap$KeyIterator.<init>(HashMap.java:1467)
java.util.HashMap$KeySet.iterator(HashMap.java:917)
java.util.HashSet.iterator(HashSet.java:173)
org.apache.flink.testutils.DeeplyEqualsChecker.deepEqualsIterable(DeeplyEqualsChecker.java:107)
org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals0(DeeplyEqualsChecker.java:94)
org.apache.flink.testutils.DeeplyEqualsChecker.lambda$deepEquals$0(DeeplyEqualsChecker.java:79)
java.util.Optional.orElseGet(Optional.java:267)
org.apache.flink.testutils.DeeplyEqualsChecker.deepEquals(DeeplyEqualsChecker.java:79)
org.apache.flink.testutils.CustomEqualityMatcher.matches(CustomEqualityMatcher.java:63)
org.hamcrest.MatcherAssert.assertThat(MatcherAssert.java:12)
org.junit.Assert.assertThat(Assert.java:956)
org.apache.flink.api.common.typeutils.SerializerTestBase.deepEquals(SerializerTestBase.java:493)
org.apache.flink.api.common.typeutils.SerializerTestBase.testSerializedCopyIndividually(SerializerTestBase.java:379)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.flink.api.common.typeutils.SerializerTestInstance.testAll(SerializerTestInstance.java:92)
org.apache.flink.api.java.typeutils.runtime.AbstractGenericTypeSerializerTest.runTests(AbstractGenericTypeSerializerTest.java:155)
org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest.testJavaSet(KryoGenericTypeSerializerTest.java:59)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
org.junit.runners.Suite.runChild(Suite.java:128)
org.junit.runners.Suite.runChild(Suite.java:27)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
org.junit.runners.ParentRunner.run(ParentRunner.java:363)
org.apache.maven.surefire.junitcore.JUnitCore.run(JUnitCore.java:55)
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.createRequestAndRun(JUnitCoreWrapper.java:137)
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.executeEager(JUnitCoreWrapper.java:107)
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:83)
org.apache.maven.surefire.junitcore.JUnitCoreWrapper.execute(JUnitCoreWrapper.java:75)
org.apache.maven.surefire.junitcore.JUnitCoreProvider.invoke(JUnitCoreProvider.java:158)
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
Reporter: cpugputpu


The test in 
org.apache.flink.api.java.typeutils.runtime.kryo.KryoGenericTypeSerializerTest.testJavaSet(KryoGenericTypeSerializerTest.java:59)
 may fail due to a different iteration order of HashSet. The test aims to check 
the correctness of the serialized copy, but the deep-equals comparison walks 
both iterables with parallel iterators, so it implicitly assumes an iteration 
order that HashSet does not guarantee.
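
As a standalone illustration (toy values, not the Flink test itself), switching 
to the insertion-ordered collection makes the parallel-iteration comparison 
deterministic:

{code:java}
import java.util.LinkedHashSet;
import java.util.Set;

// Toy illustration: HashSet's iteration order is an implementation detail,
// so a deep-equals that walks two iterables with parallel iterators can
// fail even when the sets hold the same elements (e.g. an original vs. its
// serialized copy). LinkedHashSet pins iteration to insertion order.
public class IterationOrderDemo {
    public static void main(String[] args) {
        Set<String> original = new LinkedHashSet<>();
        Set<String> copy = new LinkedHashSet<>();
        for (String s : new String[] {"a", "b", "c"}) {
            original.add(s);
            copy.add(s);
        }
        // Both iterators now yield a, b, c in lockstep; with HashSet the
        // relative order of the two iterators is unspecified.
        System.out.println(original); // [a, b, c]
        System.out.println(copy);     // [a, b, c]
    }
}
{code}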

[jira] [Commented] (FLINK-16089) Translate "Data Types" page of "Table API & SQL" into Chinese

2020-02-21 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042361#comment-17042361
 ] 

Jark Wu commented on FLINK-16089:
-

Sure [~gauss1314], you can find the contribution guide at 
https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
 and https://flink.apache.org/zh/contributing/contribute-documentation.html 

> Translate "Data Types" page of "Table API & SQL" into Chinese
> -
>
> Key: FLINK-16089
> URL: https://issues.apache.org/jira/browse/FLINK-16089
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/types.html
> The markdown file is located in {{flink/docs/dev/table/types.zh.md}}





[jira] [Assigned] (FLINK-16089) Translate "Data Types" page of "Table API & SQL" into Chinese

2020-02-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu reassigned FLINK-16089:
---

Assignee: Jiang Leilei

> Translate "Data Types" page of "Table API & SQL" into Chinese
> -
>
> Key: FLINK-16089
> URL: https://issues.apache.org/jira/browse/FLINK-16089
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Jark Wu
>Assignee: Jiang Leilei
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/types.html
> The markdown file is located in {{flink/docs/dev/table/types.zh.md}}





[GitHub] [flink] flinkbot commented on issue #11180: [FLINK-16220][json] Fix JsonRowSerializationSchema cast exception due…

2020-02-21 Thread GitBox
flinkbot commented on issue #11180: [FLINK-16220][json] Fix 
JsonRowSerializationSchema cast exception due…
URL: https://github.com/apache/flink/pull/11180#issuecomment-589914664
 
 
   
   ## CI report:
   
   * 56c928ea39dc0c1b9de3e2669d8e48994b208010 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Resolved] (FLINK-15912) Add Context to improve TableSourceFactory and TableSinkFactory

2020-02-21 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu resolved FLINK-15912.
-
Resolution: Fixed

[FLINK-15912][table] Clean TableFactoryUtil
 - master(1.11.0): 052eb7722325cf1ef91ff5b898c30996e688eabe

[FLINK-15912][table] Support create table source/sink by context in hive 
connector
 - master(1.11.0): 4e92bdb186e6abc1d5e033ebfb4978e94af20cc7

[FLINK-15912][table-planner] Support create table source/sink by context in 
legacy planner
 - master(1.11.0): e280ffc8db37697104809c9fed8b0ed2a850372c

[FLINK-15912][table-planner-blink] Support create table source/sink by context 
in blink planner
 - master(1.11.0): 69d8816d164a106f8edf61a768569dafa5b0dc8d

[FLINK-15912][table] Support create table source/sink by context in sql-cli
 - master(1.11.0): 306a89a3556ca3fbab0306301f56972ccf11641b

[FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory
 - master(1.11.0): f6895da4f762e506c4d0fa8d6dea427094e84173


> Add Context to improve TableSourceFactory and TableSinkFactory
> ---
>
> Key: FLINK-15912
> URL: https://issues.apache.org/jira/browse/FLINK-15912
> Project: Flink
>  Issue Type: Improvement
>  Components: Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Discussion in: 
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Improve-TableFactory-td36647.html]
> Vote in: 
> [http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Improve-TableFactory-to-add-Context-td37211.html]
> Motivation:
> Now the main needs and problems are:
>  * Connector can't get TableConfig[1], and some behaviors really need to be
> controlled by the user's table configuration. In the era of catalog, we
> can't put these configs in connector properties, which is too inconvenient.
>  * A context class also allows for future modifications without touching the 
> TableFactory interface again.
> Interface:
> {code:java}
>   public interface TableSourceFactory<T> extends TableFactory {
>    ..
>    /**
>     * Creates and configures a {@link TableSource} based on the given
> {@link Context}.
>     *
>     * @param context context of this table source.
>     * @return the configured table source.
>     */
>    default TableSource<T> createTableSource(Context context) {
>       return createTableSource(
>             context.getObjectIdentifier().toObjectPath(),
>             context.getTable());
>    }
>    /**
>     * Context of table source creation. Contains table information and
> environment information.
>     */
>    interface Context {
>       /**
>        * @return full identifier of the given {@link CatalogTable}.
>        */
>       ObjectIdentifier getObjectIdentifier();
>       /**
>        * @return table {@link CatalogTable} instance.
>        */
>       CatalogTable getTable();
>       /**
>        * @return readable config of this table environment.
>        */
>       ReadableConfig getConfiguration();
>    }
> }
> public interface TableSinkFactory<T> extends TableFactory {
>    ..
>    /**
>     * Creates and configures a {@link TableSink} based on the given
> {@link Context}.
>     *
>     * @param context context of this table sink.
>     * @return the configured table sink.
>     */
>    default TableSink<T> createTableSink(Context context) {
>       return createTableSink(
>             context.getObjectIdentifier().toObjectPath(),
>             context.getTable());
>    }
>    /**
>     * Context of table sink creation. Contains table information and
> environment information.
>     */
>    interface Context {
>       /**
>        * @return full identifier of the given {@link CatalogTable}.
>        */
>       ObjectIdentifier getObjectIdentifier();
>       /**
>        * @return table {@link CatalogTable} instance.
>        */
>       CatalogTable getTable();
>       /**
>        * @return readable config of this table environment.
>        */
>       ReadableConfig getConfiguration();
>    }
> }
> {code}
> Add an inner Context class to both TableSourceFactory and TableSinkFactory; the 
> reason it is defined twice is that source and sink may need different properties 
> in the future.
>  
> [1] https://issues.apache.org/jira/browse/FLINK-15290
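
For illustration, a minimal sketch of a factory consuming the proposed Context 
(the factory class, the `connector.type` value, and the omitted source 
construction are hypothetical placeholders, not Flink code):

{code:java}
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.flink.configuration.ReadableConfig;
import org.apache.flink.table.catalog.CatalogTable;
import org.apache.flink.table.catalog.ObjectIdentifier;
import org.apache.flink.table.factories.TableSourceFactory;
import org.apache.flink.table.sources.TableSource;
import org.apache.flink.types.Row;

public class MyTableSourceFactory implements TableSourceFactory<Row> {

    @Override
    public TableSource<Row> createTableSource(Context context) {
        ObjectIdentifier id = context.getObjectIdentifier(); // catalog.database.table
        CatalogTable table = context.getTable();
        ReadableConfig config = context.getConfiguration();  // env config, no extra plumbing
        // Build the connector-specific TableSource from table + config here
        // (omitted: it depends entirely on the connector).
        return null;
    }

    // TableFactory discovery boilerplate, kept minimal for the sketch.
    @Override
    public Map<String, String> requiredContext() {
        return Collections.singletonMap("connector.type", "my-connector");
    }

    @Override
    public List<String> supportedProperties() {
        return Collections.emptyList();
    }
}
{code}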





[GitHub] [flink] wuchong closed pull request #11047: [FLINK-15912][table] Add Context to TableSourceFactory and TableSinkFactory

2020-02-21 Thread GitBox
wuchong closed pull request #11047: [FLINK-15912][table] Add Context to 
TableSourceFactory and TableSinkFactory
URL: https://github.com/apache/flink/pull/11047
 
 
   




[jira] [Commented] (FLINK-13795) Web UI logs errors when selecting Checkpoint Tab for Batch Jobs

2020-02-21 Thread begginghard (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042353#comment-17042353
 ] 

begginghard commented on FLINK-13795:
-

Hi [~chesnay], how did you finally fix this problem?

 

> Web UI logs errors when selecting Checkpoint Tab for Batch Jobs
> ---
>
> Key: FLINK-13795
> URL: https://issues.apache.org/jira/browse/FLINK-13795
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / REST
>Affects Versions: 1.9.0
>Reporter: Stephan Ewen
>Priority: Major
>
> The logs of the REST endpoint print errors if you run a batch job and then 
> select the "Checkpoints" tab.
> I would expect that this simply shows "no checkpoints available for this job" 
> and not that an {{ERROR}} level entry appears in the log.
> {code}
> 2019-08-20 12:04:54,195 ERROR 
> org.apache.flink.runtime.rest.handler.job.checkpoints.CheckpointingStatisticsHandler
>   - Exception occurred in REST handler: Checkpointing has not been enabled.
> {code}





[jira] [Assigned] (FLINK-16025) Service could expose blob server port mismatched with JM Container

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16025:
-

Assignee: Canbin Zheng

> Service could expose blob server port mismatched with JM Container
> --
>
> Key: FLINK-16025
> URL: https://issues.apache.org/jira/browse/FLINK-16025
> Project: Flink
>  Issue Type: Bug
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Canbin Zheng
>Assignee: Canbin Zheng
>Priority: Critical
> Fix For: 1.10.1, 1.11.0
>
>
> The Service always exposes port 6124 when the blob server port should be 
> exposed, and since we do not explicitly specify a target port while building 
> the ServicePort, the target port always defaults to 6124 too.
> {code:java}
> // From ServiceDecorator.java
> servicePorts.add(getServicePort(
>  getPortName(BlobServerOptions.PORT.key()),
>  Constants.BLOB_SERVER_PORT));
> private ServicePort getServicePort(String name, int port) {
>return new ServicePortBuilder()
>   .withName(name)
>   .withPort(port)
>   .build();
> }
> {code}
>  
> Meanwhile, the Container of the JM exposes the blob server port that is 
> configured in the Flink configuration,
> {code:java}
> // From FlinkMasterDeploymentDecorator.java
> final int blobServerPort = KubernetesUtils.parsePort(flinkConfig, 
> BlobServerOptions.PORT);
> ...
> final Container container = createJobManagerContainer(flinkConfig, mainClass, 
> hasLogback, hasLog4j, blobServerPort);
> {code}
>  
> so if one configures the blob server port with a value different from 6124, 
> the Service exposes a port that differs from the JM Container's, and in non-HA 
> mode the TM may fail to execute tasks because it cannot fetch its dependencies.
>  
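
A sketch of one possible fix, assuming fabric8's ServicePortBuilder/IntOrString 
and that the configured blob server port is passed in instead of the 6124 
constant:

{code:java}
import io.fabric8.kubernetes.api.model.IntOrString;
import io.fabric8.kubernetes.api.model.ServicePort;
import io.fabric8.kubernetes.api.model.ServicePortBuilder;

// Point the Service's targetPort at the same (configured) port the JM
// Container exposes, so the two can no longer drift apart.
private ServicePort getServicePort(String name, int port) {
    return new ServicePortBuilder()
        .withName(name)
        .withPort(port)
        .withTargetPort(new IntOrString(port))
        .build();
}
{code}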





[jira] [Updated] (FLINK-16194) Refactor the Kubernetes decorator design

2020-02-21 Thread Canbin Zheng (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Canbin Zheng updated FLINK-16194:
-
Summary: Refactor the Kubernetes decorator design  (was: Refactor the 
Kubernetes architecture design)

> Refactor the Kubernetes decorator design
> 
>
> Key: FLINK-16194
> URL: https://issues.apache.org/jira/browse/FLINK-16194
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / Kubernetes
>Affects Versions: 1.10.0
>Reporter: Canbin Zheng
>Priority: Critical
> Fix For: 1.11.0
>
>
> So far, Flink has made efforts for the native integration of Kubernetes. 
> However, it is always essential to evaluate the existing design and consider 
> alternatives that have better design and are easier to maintain in the long 
> run. We have suffered from some problems while developing new features based 
> on the current code. Here are some of them:
>  # We don’t have a unified monadic-step based orchestrator architecture to 
> construct all the Kubernetes resources.
>  ** There are inconsistencies between the orchestrator architecture that the 
> client uses to create the Kubernetes resources and the one that the master 
> uses to create Pods; this confuses new contributors, as there is a cognitive 
> burden to understand two architectural philosophies instead of one; moreover, 
> maintenance and new feature development become quite challenging.
>  ** Pod construction is done in one step. With the introduction of new 
> features for the Pod, the construction process could become far more 
> complicated, and the functionality of a single class could explode, which 
> hurts code readability, writability, and testability. At the moment, we have 
> encountered such challenges and realized that it is not an easy thing to 
> develop new features related to the Pod.
>  ** The implementations of a specific feature are usually scattered in 
> multiple decoration classes. For example, the current design uses a 
> decoration class chain that contains five Decorator classes to mount a 
> configuration file to the Pod. If people would like to introduce support for 
> other configuration files, such as Hadoop configuration or keytab files, 
> they have no choice but to repeat the same tedious and scattered process.
>  # We don’t have dedicated objects or tools for centrally parsing, verifying, 
> and managing the Kubernetes parameters, which has raised some maintenance and 
> inconsistency issues.
>  ** There is a lot of duplicated parsing and validation code, including settings 
> of Image, ImagePullPolicy, ClusterID, ConfDir, Labels, etc. It not only harms 
> readability and testability but also is prone to mistakes. Refer to issue 
> FLINK-16025 for inconsistent parsing of the same parameter.
>  ** The parameters are scattered so that some of the method signatures have 
> to declare many unnecessary input parameters, such as 
> FlinkMasterDeploymentDecorator#createJobManagerContainer.
>  
> To solve these issues, we propose to: 
>  # Introduce a unified monadic-step based orchestrator architecture that has 
> a better, cleaner and consistent abstraction for the Kubernetes resources 
> construction process. 
>  # Add some dedicated tools for centrally parsing, verifying, and managing 
> the Kubernetes parameters.
>  
> Refer to the design doc for the details, any feedback is welcome.
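
As a rough illustration of the proposed step-based orchestration (names 
hypothetical, not the eventual Flink classes):

{code:java}
import java.util.List;
import java.util.function.UnaryOperator;

// Each step decorates exactly one concern (e.g. mounting one config file);
// the orchestrator folds the steps, in order, over the initial spec.
final class StepOrchestrator<SPEC> {

    private final List<UnaryOperator<SPEC>> steps;

    StepOrchestrator(List<UnaryOperator<SPEC>> steps) {
        this.steps = steps;
    }

    SPEC buildFrom(SPEC initial) {
        SPEC current = initial;
        for (UnaryOperator<SPEC> step : steps) {
            current = step.apply(current); // each step returns a further-decorated spec
        }
        return current;
    }
}
{code}

Under such a shape, adding, say, Hadoop configuration support becomes one extra 
step in the list instead of edits scattered across five decorator classes.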





[GitHub] [flink] flinkbot commented on issue #11180: [FLINK-16220][json] Fix JsonRowSerializationSchema cast exception due…

2020-02-21 Thread GitBox
flinkbot commented on issue #11180: [FLINK-16220][json] Fix 
JsonRowSerializationSchema cast exception due…
URL: https://github.com/apache/flink/pull/11180#issuecomment-589910357
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 56c928ea39dc0c1b9de3e2669d8e48994b208010 (Sat Feb 22 
02:55:54 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-16220).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   




[jira] [Updated] (FLINK-16220) JsonRowSerializationSchema throws cast exception : NullNode cannot be cast to ArrayNode

2020-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16220:
---
Labels: pull-request-available  (was: )

> JsonRowSerializationSchema throws cast exception : NullNode cannot be cast to 
> ArrayNode
> ---
>
> Key: FLINK-16220
> URL: https://issues.apache.org/jira/browse/FLINK-16220
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Reporter: Benchao Li
>Priority: Major
>  Labels: pull-request-available
>
> It's caused by object reuse. For the schema below:
> {code:java}
> create table sink {
>   col1 int,
>   col2 array
> }{code}
> If col2 is null, then the reused object will be {{NullNode}}. For the next 
> record, if it's not null, we will cast the reused {{NullNode}} to 
> {{ArrayNode}}, which will throw a cast exception.
>  
> cc [~jark] [~twalthr] 
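
A sketch of the guard described above (the Jackson types are real; the method 
and parameter names here are illustrative, not the actual Flink code):

{code:java}
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.node.ArrayNode;
import com.fasterxml.jackson.databind.node.JsonNodeFactory;

// Only reuse the cached node when it actually is an ArrayNode; a null
// column in the previous record leaves a NullNode behind, and casting
// that to ArrayNode is exactly the reported ClassCastException.
private ArrayNode reuseOrCreateArrayNode(JsonNode reuse) {
    if (reuse instanceof ArrayNode) {
        ArrayNode array = (ArrayNode) reuse;
        array.removeAll(); // clear previous contents before refilling
        return array;
    }
    return JsonNodeFactory.instance.arrayNode();
}
{code}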





[GitHub] [flink] libenchao opened a new pull request #11180: [FLINK-16220][json] Fix JsonRowSerializationSchema cast exception due…

2020-02-21 Thread GitBox
libenchao opened a new pull request #11180: [FLINK-16220][json] Fix 
JsonRowSerializationSchema cast exception due…
URL: https://github.com/apache/flink/pull/11180
 
 
   … to object reuse
   
   
   
   ## What is the purpose of the change
   
   Fix JsonRowSerializationSchema cast exception.
   
   ## Brief change log
   
   Add an instanceof check before the cast to avoid nested array/map/row cast exceptions.
   
   ## Verifying this change
   
   This change added tests and can be verified as follows:
   
   - JsonRowSerializationSchemaTest.testNestedSchema()
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / **no**)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / **no**)
 - The serializers: (yes / **no** / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / **no** 
/ don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / **no** / 
don't know)
 - The S3 file system connector: (yes / **no** / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / **no**)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   




[GitHub] [flink] zgq25302111 commented on issue #11114: FLINK-16105 Translate "User-defined Sources & Sinks" page of "Table API & SQL" into Chinese

2020-02-21 Thread GitBox
zgq25302111 commented on issue #11114: FLINK-16105 Translate "User-defined 
Sources & Sinks" page of "Table API & SQL" into Chinese
URL: https://github.com/apache/flink/pull/11114#issuecomment-589909797
 
 
   > Hi @zgq25302111 , it seems that the content is not translated. Is this PR 
a mistake?
   
   Sorry. It is a mistake.
   I will participate in the translation gradually. 




[jira] [Assigned] (FLINK-16206) Support JSON_ARRAYAGG for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16206:
-

Assignee: Forward Xu

> Support JSON_ARRAYAGG for blink planner
> ---
>
> Key: FLINK-16206
> URL: https://issues.apache.org/jira/browse/FLINK-16206
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-16204) Support JSON_ARRAY for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16204:
-

Assignee: Forward Xu

> Support JSON_ARRAY for blink planner
> 
>
> Key: FLINK-16204
> URL: https://issues.apache.org/jira/browse/FLINK-16204
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-16205) Support JSON_OBJECTAGG for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16205:
-

Assignee: Forward Xu

> Support JSON_OBJECTAGG for blink planner
> 
>
> Key: FLINK-16205
> URL: https://issues.apache.org/jira/browse/FLINK-16205
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-16203) Support JSON_OBJECT for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16203:
-

Assignee: Forward Xu

> Support JSON_OBJECT for blink planner
> -
>
> Key: FLINK-16203
> URL: https://issues.apache.org/jira/browse/FLINK-16203
> Project: Flink
>  Issue Type: Sub-task
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-9477) FLIP-90: Support SQL 2016 JSON functions in Flink SQL

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-9477:


Assignee: Forward Xu  (was: Shuyi Chen)

> FLIP-90: Support SQL 2016 JSON functions in Flink SQL
> -
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>
> FLIP Link 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=141724550





[jira] [Assigned] (FLINK-16201) Support JSON_VALUE for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16201:
-

Assignee: Forward Xu

> Support JSON_VALUE for blink planner
> 
>
> Key: FLINK-16201
> URL: https://issues.apache.org/jira/browse/FLINK-16201
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Assigned] (FLINK-16202) Support JSON_QUERY for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16202:
-

Assignee: Forward Xu

> Support JSON_QUERY for blink planner
> 
>
> Key: FLINK-16202
> URL: https://issues.apache.org/jira/browse/FLINK-16202
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-9477) FLIP-90: Support SQL 2016 JSON functions in Flink SQL

2020-02-21 Thread Zili Chen (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042345#comment-17042345
 ] 

Zili Chen commented on FLINK-9477:
--

[~x1q1j1] Sure, given you're the author of the detailed FLIP-90, I've assigned 
the issues to you.

> FLIP-90: Support SQL 2016 JSON functions in Flink SQL
> -
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.11.0
>
>
> FLIP Link 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=141724550





[jira] [Assigned] (FLINK-16200) Support JSON_EXISTS for blink planner

2020-02-21 Thread Zili Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zili Chen reassigned FLINK-16200:
-

Assignee: Forward Xu

> Support JSON_EXISTS for blink planner
> -
>
> Key: FLINK-16200
> URL: https://issues.apache.org/jira/browse/FLINK-16200
> Project: Flink
>  Issue Type: Sub-task
>  Components: Table SQL / Planner
>Reporter: Zili Chen
>Assignee: Forward Xu
>Priority: Major
> Fix For: 1.11.0
>
>






[jira] [Commented] (FLINK-9477) FLIP-90: Support SQL 2016 JSON functions in Flink SQL

2020-02-21 Thread Forward Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042343#comment-17042343
 ] 

Forward Xu commented on FLINK-9477:
---

Hi [~jark] [~tison], can you assign them to me?

 

> FLIP-90: Support SQL 2016 JSON functions in Flink SQL
> -
>
> Key: FLINK-9477
> URL: https://issues.apache.org/jira/browse/FLINK-9477
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Shuyi Chen
>Assignee: Shuyi Chen
>Priority: Major
> Fix For: 1.11.0
>
>
> FLIP Link 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=141724550





[GitHub] [flink-statefun] tzulitai commented on a change in pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #30: [FLINK-16226] Add 
Backpressure to HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#discussion_r382874440
 
 

 ##
 File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/httpfn/HttpFunction.java
 ##
 @@ -89,12 +101,28 @@ public void invoke(Context context, Object input) {
 
   private void onRequest(Context context, Any message) {
 Invocation.Builder invocationBuilder = singeInvocationBuilder(context, 
message);
-if (hasInFlightRpc.getOrDefault(Boolean.FALSE)) {
-  batch.append(invocationBuilder.build());
+int inflightOrBatched = requestState.getOrDefault(-1);
+if (inflightOrBatched < 0) {
+  // no inflight requests, and nothing in the batch.
+  // so we let this request to go through, and change state to indicate 
that:
+  // a) there is a request in flight.
+  // b) there is nothing in the batch.
+  requestState.set(0);
+  sendToFunction(context, invocationBuilder);
   return;
 }
-hasInFlightRpc.set(Boolean.TRUE);
-sendToFunction(context, invocationBuilder);
+// there is at least one request in flight (inflightOrBatched >= 0),
+// so we add that request to the batch.
+batch.append(invocationBuilder.build());
+inflightOrBatched++;
+requestState.set(inflightOrBatched);
+if (isMaxBatchSizeExceeded(inflightOrBatched)) {
+  // we are at capacity, can't add anything to the batch.
+  // we need to signal to the runtime that we are unable to process any 
new input
+  // and we must wait for our in flight asynchronous operation to complete 
before
+  // we are able to process more input.
+  ((AsyncWaiter) context).awaitAsyncOperationComplete();
 
 Review comment:
   Would it make sense to add an `AsyncWaiter` argument to the `onRequest` 
method, and do the casting at the call site? That just makes this a little bit 
more readable.
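
   A toy restatement of the suggestion (all types stubbed here; the real ones 
live in statefun-flink-core, and the cast still assumes the Context 
implementation also implements AsyncWaiter, as in the PR):

{code:java}
interface AsyncWaiter { void awaitAsyncOperationComplete(); }
interface Context { }

class OnRequestSketch {
    void invoke(Context context, Object input) {
        // cast once, at the call site
        onRequest(context, (AsyncWaiter) context, input);
    }

    private void onRequest(Context context, AsyncWaiter waiter, Object message) {
        boolean atCapacity = true; // stand-in for isMaxBatchSizeExceeded(...)
        if (atCapacity) {
            waiter.awaitAsyncOperationComplete(); // no cast needed in here
        }
    }
}
{code}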




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #30: [FLINK-16226] Add 
Backpressure to HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#discussion_r382876277
 
 

 ##
 File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/httpfn/HttpFunction.java
 ##
 @@ -109,9 +137,17 @@ private void onAsyncResult(
 handleInvocationResponse(context, invocationResult);
 InvocationBatchRequest.Builder nextBatch = getNextBatch();
 if (nextBatch == null) {
-  hasInFlightRpc.clear();
+  // the async request was completed, and there is nothing else in the 
batch
+  // so we clear the requestState.
+  requestState.clear();
   return;
 }
+// an async request was just completed, but while it was in flight we have
+// accumulated a batch, we now proceed with:
+// a) clearing the batch from our own persisted state (the batch moves to 
the async operation
+// state)
+// b) sending the accumulated batch to the remote function.
+requestState.set(0);
 
 Review comment:
   If that's the case, then probably `maxBatchSize` should be renamed to 
`maxPendingSize`.




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #30: [FLINK-16226] Add 
Backpressure to HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#discussion_r382876533
 
 

 ##
 File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/httpfn/HttpFunction.java
 ##
 @@ -109,9 +137,17 @@ private void onAsyncResult(
 handleInvocationResponse(context, invocationResult);
 InvocationBatchRequest.Builder nextBatch = getNextBatch();
 if (nextBatch == null) {
-  hasInFlightRpc.clear();
+  // the async request was completed, and there is nothing else in the 
batch
+  // so we clear the requestState.
+  requestState.clear();
   return;
 }
+// an async request was just completed, but while it was in flight we have
+// accumulated a batch, we now proceed with:
+// a) clearing the batch from our own persisted state (the batch moves to 
the async operation
+// state)
+// b) sending the accumulated batch to the remote function.
+requestState.set(0);
 
 Review comment:
   On the other hand, if the intended semantics is that we only backpressure 
when the locally accumulated batch exceeds the threshold, without accounting 
for the number of in-flight records (records, not requests; a batch request 
may carry multiple records), then probably `inflightOrBatched` should be 
renamed to `batchedSize`.




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #30: [FLINK-16226] Add 
Backpressure to HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#discussion_r382876194
 
 

 ##
 File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/httpfn/HttpFunction.java
 ##
 @@ -109,9 +137,17 @@ private void onAsyncResult(
 handleInvocationResponse(context, invocationResult);
 InvocationBatchRequest.Builder nextBatch = getNextBatch();
 if (nextBatch == null) {
-  hasInFlightRpc.clear();
+  // the async request was completed, and there is nothing else in the 
batch
+  // so we clear the requestState.
+  requestState.clear();
   return;
 }
+// an async request was just completed, but while it was in flight we have
+// accumulated a batch, we now proceed with:
+// a) clearing the batch from our own persisted state (the batch moves to 
the async operation
+// state)
+// b) sending the accumulated batch to the remote function.
+requestState.set(0);
 
 Review comment:
   Should this actually be `requestState.set(nextBatch.size())`?
   Or rather, we don't set it to 0 because at this point request state is 
already == accumulated batch size.
   
   Reasoning is:
   In the `onRequest` method, as I understood it from the naming, the 
`inflightOrBatched` (counter obtained from `requestState`) reflects total 
number of "pending" records, regardless of whether they are buffered or 
in-flight.
   Setting `requestState` to 0 here breaks that semantic.




[GitHub] [flink-statefun] tzulitai commented on a change in pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
tzulitai commented on a change in pull request #30: [FLINK-16226] Add 
Backpressure to HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30#discussion_r382876852
 
 

 ##
 File path: 
statefun-flink/statefun-flink-core/src/main/java/org/apache/flink/statefun/flink/core/httpfn/HttpFunction.java
 ##
 @@ -109,9 +137,17 @@ private void onAsyncResult(
 handleInvocationResponse(context, invocationResult);
 InvocationBatchRequest.Builder nextBatch = getNextBatch();
 if (nextBatch == null) {
-  hasInFlightRpc.clear();
+  // the async request was completed, and there is nothing else in the 
batch
+  // so we clear the requestState.
+  requestState.clear();
   return;
 }
+// an async request was just completed, but while it was in flight we have
+// accumulated a batch, we now proceed with:
+// a) clearing the batch from our own persisted state (the batch moves to 
the async operation
+// state)
+// b) sending the accumulated batch to the remote function.
+requestState.set(0);
 
 Review comment:
   So, just to sum up:
   I'm trying to clarify the intended backpressure semantics w.r.t. some of 
the variable names here.
   There seems to be some inconsistency in the naming.




[GitHub] [flink] flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests
URL: https://github.com/apache/flink/pull/11178#issuecomment-589751914
 
 
   
   ## CI report:
   
   * b2e002eb9b0f340f4efd6e79b22c19d9b588993f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150066458) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5436)
 
   * 5e4b742577824a69966ee284b055495bbc17644a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150116897) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Updated] (FLINK-16229) testCompressionOnRelativePath() fails due to Files.createDirectory() in macOS

2020-02-21 Thread Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu updated FLINK-16229:

Description: 
      I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute "mvn 
-Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath test". 
It reports the following error.

      On Linux, the test passes. So I think that Files.createDirectory(relative 
path) cannot work well on macOS. Should Flink ignore this test, or is there 
something better to do?

java.nio.file.NoSuchFileException: 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit2476210278718567929/compressDir/rootDir
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
 at java.nio.file.Files.createDirectory(Files.java:674) at 
org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
 at 
org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)

  was:
      I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute "mvn 
-Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath test". 
It reports the following error.

      On Linux, the test passes. So I think that Files.createDirectory() cannot 
work well on macOS. Should Flink ignore this test, or is there something better to do?

java.nio.file.NoSuchFileException: 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit2476210278718567929/compressDir/rootDir
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
 at java.nio.file.Files.createDirectory(Files.java:674) at 
org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
 at 
org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)


> testCompressionOnRelativePath() fails due to Files.createDirectory() in macOS
> -
>
> Key: FLINK-16229
> URL: https://issues.apache.org/jira/browse/FLINK-16229
> Project: Flink
>  Issue Type: Bug
>Reporter: Liu
>Priority: Minor
>
>       I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute 
> "mvn -Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath 
> test". It reports the following error.
>       On Linux, the test passes. So I think that 
> Files.createDirectory(relative path) cannot work well on macOS. Should Flink 
> ignore this test, or is there something better to do?
> java.nio.file.NoSuchFileException: 
> ../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit2476210278718567929/compressDir/rootDir
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  at java.nio.file.Files.createDirectory(Files.java:674) at 
> org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
>  at 
> org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)
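
The symptom is consistent with macOS's /var -> /private/var symlink: a relative 
path full of .. segments, computed from the raw (non-canonical) paths, can point 
at a directory that does not exist. A sketch of a possible workaround, resolving 
both sides to their real paths before relativizing:

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class RelativePathDemo {
    public static void main(String[] args) throws IOException {
        // toRealPath() resolves symlinks (/var -> /private/var on macOS),
        // so the relative path below is computed between canonical paths.
        Path tmp = Files.createTempDirectory("compressDir").toRealPath();
        Path cwd = Paths.get("").toAbsolutePath().toRealPath();
        Path relative = cwd.relativize(tmp);
        Files.createDirectory(relative.resolve("rootDir"));
        System.out.println("created " + relative.resolve("rootDir"));
    }
}
{code}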





[jira] [Updated] (FLINK-16229) testCompressionOnRelativePath() fails due to Files.createDirectory() in macOS

2020-02-21 Thread Liu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liu updated FLINK-16229:

Description: 
      I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute "mvn 
-Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath test". 
It reports the following error.

      On Linux, the test passes. So I think that Files.createDirectory() cannot 
work well on macOS. Should Flink ignore this test, or is there something better to do?

java.nio.file.NoSuchFileException: 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit2476210278718567929/compressDir/rootDir
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
 at java.nio.file.Files.createDirectory(Files.java:674) at 
org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
 at 
org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)

  was:
      I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute "mvn 
-Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath test". 
It reports the following error.

      On Linux, the test passes. So I think that Files.createDirectory() cannot 
work well on macOS. Should Flink ignore this test, or is there something better to do?

 

java.nio.file.NoSuchFileException: 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit761366460676035615/compressDir/rootDirjava.nio.file.NoSuchFileException:
 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit761366460676035615/compressDir/rootDir
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
 at java.nio.file.Files.createDirectory(Files.java:674) at 
org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:445)
 at 
org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:265)


> testCompressionOnRelativePath() fails due to Files.createDirectory() in macOS
> -
>
> Key: FLINK-16229
> URL: https://issues.apache.org/jira/browse/FLINK-16229
> Project: Flink
>  Issue Type: Bug
>Reporter: Liu
>Priority: Minor
>
>       I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute 
> "mvn -Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath 
> test". It reports the following error.
>       On Linux, the test passes. So I think that Files.createDirectory() cannot 
> work well on macOS. Should Flink ignore this test, or is there something better to do?
> java.nio.file.NoSuchFileException: 
> ../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit2476210278718567929/compressDir/rootDir
>  at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
> sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
> sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
>  at java.nio.file.Files.createDirectory(Files.java:674) at 
> org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:440)
>  at 
> org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:261)





[jira] [Created] (FLINK-16229) testCompressionOnRelativePath() fails due to Files.createDirectory() in macOS

2020-02-21 Thread Liu (Jira)
Liu created FLINK-16229:
---

 Summary: testCompressionOnRelativePath() fails due to 
Files.createDirectory() in macOS
 Key: FLINK-16229
 URL: https://issues.apache.org/jira/browse/FLINK-16229
 Project: Flink
  Issue Type: Bug
Reporter: Liu


      I am using Flink 1.10.0. On macOS, cd into flink/flink-core and execute "mvn 
-Dtest=org.apache.flink.util.FileUtilsTest#testCompressionOnRelativePath test". 
It reports the following error.

      On Linux, the test passes. So I think that Files.createDirectory() cannot 
work well on macOS. Should Flink ignore this test, or is there something better to do?

 

java.nio.file.NoSuchFileException: 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit761366460676035615/compressDir/rootDirjava.nio.file.NoSuchFileException:
 
../../../../../var/folders/7h/3lhbyjl15m93hz9vpx303jvhgn/T/junit761366460676035615/compressDir/rootDir
 at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) at 
sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) at 
sun.nio.fs.UnixFileSystemProvider.createDirectory(UnixFileSystemProvider.java:384)
 at java.nio.file.Files.createDirectory(Files.java:674) at 
org.apache.flink.util.FileUtilsTest.verifyDirectoryCompression(FileUtilsTest.java:445)
 at 
org.apache.flink.util.FileUtilsTest.testCompressionOnRelativePath(FileUtilsTest.java:265)





[GitHub] [flink] libenchao commented on a change in pull request #11172: [FLINK-16067][table] Forward Calcite exception when parsing a sql query

2020-02-21 Thread GitBox
libenchao commented on a change in pull request #11172: [FLINK-16067][table] 
Forward Calcite exception when parsing a sql query
URL: https://github.com/apache/flink/pull/11172#discussion_r382872573
 
 

 ##
 File path: 
flink-table/flink-table-planner/src/main/java/org/apache/flink/table/calcite/CalciteParser.java
 ##
 @@ -47,7 +47,7 @@ public SqlNode parse(String sql) {
SqlParser parser = SqlParser.create(sql, config);
return parser.parseStmt();
} catch (SqlParseException e) {
-   throw new SqlParserException("SQL parse failed. " + 
e.getMessage());
+   throw new SqlParserException("SQL parse failed." + 
e.getMessage(), e);
 
 Review comment:
   seems removing a space by mistake




[GitHub] [flink] flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests
URL: https://github.com/apache/flink/pull/11178#issuecomment-589751914
 
 
   
   ## CI report:
   
   * b2e002eb9b0f340f4efd6e79b22c19d9b588993f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150066458) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5436)
 
   * 5e4b742577824a69966ee284b055495bbc17644a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150116897) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[GitHub] [flink] flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests
URL: https://github.com/apache/flink/pull/11178#issuecomment-589751914
 
 
   
   ## CI report:
   
   * b2e002eb9b0f340f4efd6e79b22c19d9b588993f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150066458) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5436)
 
   * 5e4b742577824a69966ee284b055495bbc17644a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   




[jira] [Created] (FLINK-16228) test_mesos_wordcount.sh fails

2020-02-21 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16228:
--

 Summary: test_mesos_wordcount.sh fails
 Key: FLINK-16228
 URL: https://issues.apache.org/jira/browse/FLINK-16228
 Project: Flink
  Issue Type: Bug
  Components: Deployment / Mesos, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


In a recent cron build, the mesos wordcount test failed: 
https://travis-ci.org/apache/flink/jobs/653454544

{code}
2020-02-21 20:37:44,334 INFO  
org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Shutting 
MesosSessionClusterEntrypoint down with application status FAILED. Diagnostics 
java.lang.NoClassDefFoundError: org/apache/hadoop/security/UserGroupInformation
at 
org.apache.flink.runtime.clusterframework.overlays.HadoopUserOverlay$Builder.fromEnvironment(HadoopUserOverlay.java:74)
at 
org.apache.flink.mesos.util.MesosUtils.applyOverlays(MesosUtils.java:152)
at 
org.apache.flink.mesos.util.MesosUtils.createContainerSpec(MesosUtils.java:131)
at 
org.apache.flink.mesos.runtime.clusterframework.MesosResourceManagerFactory.createActiveResourceManager(MesosResourceManagerFactory.java:81)
at 
org.apache.flink.runtime.resourcemanager.ActiveResourceManagerFactory.createResourceManager(ActiveResourceManagerFactory.java:57)
at 
org.apache.flink.runtime.entrypoint.component.DefaultDispatcherResourceManagerComponentFactory.create(DefaultDispatcherResourceManagerComponentFactory.java:170)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:215)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:169)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:168)
at 
org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runClusterEntrypoint(ClusterEntrypoint.java:518)
at 
org.apache.flink.mesos.entrypoint.MesosSessionClusterEntrypoint.main(MesosSessionClusterEntrypoint.java:126)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.security.UserGroupInformation
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
... 12 more
.
{code}






[jira] [Updated] (FLINK-15561) Unify Kerberos credentials checking

2020-02-21 Thread Rong Rong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong updated FLINK-15561:
--
Summary: Unify Kerberos credentials checking  (was: Improve Kerberos 
delegation token login )

> Unify Kerberos credentials checking
> ---
>
> Key: FLINK-15561
> URL: https://issues.apache.org/jira/browse/FLINK-15561
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / YARN
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Inspired by the discussion in 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Yarn-Kerberos-issue-td31894.html#a31933]
>  
> Currently the security HadoopModule handles delegation token login via two 
> different code paths. 
> Flink needs to ensure that a delegation token is also a valid form of 
> credential when launching the YARN context. See [1] 
> [https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java#L484]
>  and [2] 
> [https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java#L146]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-15561) Improve Kerberos delegation token login

2020-02-21 Thread Rong Rong (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-15561?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rong Rong resolved FLINK-15561.
---
Resolution: Fixed

> Improve Kerberos delegation token login 
> 
>
> Key: FLINK-15561
> URL: https://issues.apache.org/jira/browse/FLINK-15561
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / YARN
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Inspired by the discussion in 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Yarn-Kerberos-issue-td31894.html#a31933]
>  
> Currently the security HadoopModule handles delegation token login via 2 
> different code paths. 
> Flink needs to ensure that a delegation token is also a valid format of 
> credential when launching the YARN context. See [1] 
> [https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java#L484]
>  and [2] 
> [https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java#L146]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-15561) Improve Kerberos delegation token login

2020-02-21 Thread Rong Rong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15561?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042273#comment-17042273
 ] 

Rong Rong commented on FLINK-15561:
---

fixed:

master: 57c33961a55cff1068345198cb4669d9f1313bf8
release-1.10: 8751e69037d8a9b1756b75eed62a368c3ef29137


 

> Improve Kerberos delegation token login 
> 
>
> Key: FLINK-15561
> URL: https://issues.apache.org/jira/browse/FLINK-15561
> Project: Flink
>  Issue Type: Sub-task
>  Components: Deployment / YARN
>Reporter: Rong Rong
>Assignee: Rong Rong
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.10.1, 1.11.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Inspired by the discussion in 
> [http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Yarn-Kerberos-issue-td31894.html#a31933]
>  
> Currently the security HadoopModule handles delegation token login via 2 
> different code paths. 
> Flink needs to ensure that a delegation token is also a valid format of 
> credential when launching the YARN context. See [1] 
> [https://github.com/apache/flink/blob/master/flink-yarn/src/main/java/org/apache/flink/yarn/YarnClusterDescriptor.java#L484]
>  and [2] 
> [https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/security/modules/HadoopModule.java#L146]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16227) Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable

2020-02-21 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042272#comment-17042272
 ] 

Robert Metzger commented on FLINK-16227:


It seems that the script did not pick up the job id, and thus waits 
indefinitely for the checkpoints to complete.
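
For context, a minimal sketch of the job-id pickup such a script relies on (the pattern and names are illustrative assumptions, not the test's actual code): if the pattern stops matching the submission output, the extracted id is empty and the checkpoint wait loop keyed on it never terminates.

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: extract the job id from the CLI submission output.
// An empty result here is what makes the wait loop spin indefinitely.
public final class JobIdExtractor {

    private static final Pattern JOB_ID =
            Pattern.compile("Job has been submitted with JobID ([0-9a-f]{32})");

    public static String extract(String submissionOutput) {
        Matcher m = JOB_ID.matcher(submissionOutput);
        return m.find() ? m.group(1) : ""; // empty => nothing to poll for
    }
}
{code}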

> Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable
> --
>
> Key: FLINK-16227
> URL: https://issues.apache.org/jira/browse/FLINK-16227
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Major
>  Labels: test-stability
>
> This nightly cron job has failed: 
> https://travis-ci.org/apache/flink/jobs/653454540
> {code}
> ==
> Running 'Streaming bucketing end-to-end test'
> ==
> TEST_DATA_DIR: 
> /home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-05739414867
> Flink dist directory: 
> /home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
> Setting up SSL with: internal JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Setting up SSL with: rest JDK dynamic
> Using SAN 
> dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
> Certificate was added to keystore
> Certificate was added to keystore
> Certificate reply was installed in keystore
> MAC verified OK
> Mutual ssl auth: false
> Starting cluster.
> Starting standalonesession daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Waiting for Dispatcher REST endpoint to come up...
> Dispatcher REST endpoint is up.
> [INFO] 1 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 2 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> [INFO] 3 instance(s) of taskexecutor are already running on 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Starting taskexecutor daemon on host 
> travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
> Number of running task managers 1 is not yet 4.
> Number of running task managers 2 is not yet 4.
> Number of running task managers has reached 4.
> java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
>   at java.lang.Class.getDeclaredMethods0(Native Method)
>   at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
>   at java.lang.Class.getDeclaredMethod(Class.java:2128)
>   at 
> org.apache.flink.api.java.ClosureCleaner.usesCustomSerialization(ClosureCleaner.java:164)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:89)
>   at 
> org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1820)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
>   at 
> org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1328)
>   at 
> org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:84)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
>   at 

[jira] [Created] (FLINK-16227) Streaming bucketing end-to-end test / test_streaming_bucketing.sh unstable

2020-02-21 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-16227:
--

 Summary: Streaming bucketing end-to-end test / 
test_streaming_bucketing.sh unstable
 Key: FLINK-16227
 URL: https://issues.apache.org/jira/browse/FLINK-16227
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


This nightly cron job has failed: 
https://travis-ci.org/apache/flink/jobs/653454540

{code}
==
Running 'Streaming bucketing end-to-end test'
==
TEST_DATA_DIR: 
/home/travis/build/apache/flink/flink-end-to-end-tests/test-scripts/temp-test-directory-05739414867
Flink dist directory: 
/home/travis/build/apache/flink/flink-dist/target/flink-1.11-SNAPSHOT-bin/flink-1.11-SNAPSHOT
Setting up SSL with: internal JDK dynamic
Using SAN 
dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
Certificate was added to keystore
Certificate was added to keystore
Certificate reply was installed in keystore
MAC verified OK
Setting up SSL with: rest JDK dynamic
Using SAN 
dns:travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7,ip:10.20.0.145,ip:172.17.0.1
Certificate was added to keystore
Certificate was added to keystore
Certificate reply was installed in keystore
MAC verified OK
Mutual ssl auth: false
Starting cluster.
Starting standalonesession daemon on host 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Starting taskexecutor daemon on host 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Waiting for Dispatcher REST endpoint to come up...
Waiting for Dispatcher REST endpoint to come up...
Waiting for Dispatcher REST endpoint to come up...
Waiting for Dispatcher REST endpoint to come up...
Waiting for Dispatcher REST endpoint to come up...
Dispatcher REST endpoint is up.
[INFO] 1 instance(s) of taskexecutor are already running on 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Starting taskexecutor daemon on host 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
[INFO] 2 instance(s) of taskexecutor are already running on 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Starting taskexecutor daemon on host 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
[INFO] 3 instance(s) of taskexecutor are already running on 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Starting taskexecutor daemon on host 
travis-job-b9e26d64-0a62-42c7-9802-6c49defb4ad7.
Number of running task managers 1 is not yet 4.
Number of running task managers 2 is not yet 4.
Number of running task managers has reached 4.
java.lang.NoClassDefFoundError: org/apache/hadoop/conf/Configuration
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.getDeclaredMethod(Class.java:2128)
at 
org.apache.flink.api.java.ClosureCleaner.usesCustomSerialization(ClosureCleaner.java:164)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:89)
at 
org.apache.flink.api.java.ClosureCleaner.clean(ClosureCleaner.java:71)
at 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.clean(StreamExecutionEnvironment.java:1820)
at 
org.apache.flink.streaming.api.datastream.DataStream.clean(DataStream.java:188)
at 
org.apache.flink.streaming.api.datastream.DataStream.addSink(DataStream.java:1328)
at 
org.apache.flink.streaming.tests.BucketingSinkTestProgram.main(BucketingSinkTestProgram.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:321)
at 
org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:205)
at 
org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:138)
at 
org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:664)
at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
at 
org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:895)
at 
org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:968)
at 
org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:968)
Caused by: java.lang.ClassNotFoundException: 
org.apache.hadoop.conf.Configuration
at 

[jira] [Comment Edited] (FLINK-13550) Support for CPU FlameGraphs in new web UI

2020-02-21 Thread Minko Gechev (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042242#comment-17042242
 ] 

Minko Gechev edited comment on FLINK-13550 at 2/21/20 11:42 PM:


Recently I developed a FlameGraph for Angular, which might come in handy. You can 
find the component here: [https://ngx-flamegraph.firebaseapp.com/].


was (Author: mgechev):
Recently I developed a FlameGraph for Angular, which might come handy. You can 
find the component [here|[https://ngx-flamegraph.firebaseapp.com/]].

> Support for CPU FlameGraphs in new web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>
> For a better insight into a running job, it would be useful to have ability 
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in-place, so it should be 
> straightforward to implement.
> This should be done by implementing a new endpoint in REST API, which would 
> sample the stack-trace the same way as current BackPressureTracker does, only 
> with a different sampling rate and length of sampling.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.
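
To make the proposal concrete, a small sketch of the aggregation step such an endpoint could perform (illustrative only; Flink's actual sampling infrastructure and the REST wiring are out of scope here): collapse repeated stack-trace samples into "folded" frame paths with sample counts, the input format common flame-graph renderers consume.

{code:java}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: fold sampled stack traces into "root;...;leaf" paths
// with sample counts, the usual input for flame-graph rendering.
public final class FlameGraphFolder {

    public static Map<String, Integer> fold(Iterable<StackTraceElement[]> samples) {
        Map<String, Integer> folds = new HashMap<>();
        for (StackTraceElement[] sample : samples) {
            StringBuilder path = new StringBuilder();
            for (int i = sample.length - 1; i >= 0; i--) { // root first
                if (path.length() > 0) {
                    path.append(';');
                }
                path.append(sample[i].getClassName())
                    .append('.')
                    .append(sample[i].getMethodName());
            }
            folds.merge(path.toString(), 1, Integer::sum);
        }
        return folds;
    }
}
{code}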



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-13550) Support for CPU FlameGraphs in new web UI

2020-02-21 Thread Minko Gechev (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042242#comment-17042242
 ] 

Minko Gechev commented on FLINK-13550:
--

Recently I developed a FlameGraph for Angular, which might come in handy. You can 
find the component [here|https://ngx-flamegraph.firebaseapp.com/].

> Support for CPU FlameGraphs in new web UI
> -
>
> Key: FLINK-13550
> URL: https://issues.apache.org/jira/browse/FLINK-13550
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / REST, Runtime / Web Frontend
>Reporter: David Morávek
>Assignee: David Morávek
>Priority: Major
>
> For a better insight into a running job, it would be useful to have ability 
> to render a CPU flame graph for a particular job vertex.
> Flink already has a stack-trace sampling mechanism in-place, so it should be 
> straightforward to implement.
> This should be done by implementing a new endpoint in REST API, which would 
> sample the stack-trace the same way as current BackPressureTracker does, only 
> with a different sampling rate and length of sampling.
> [Here|https://www.youtube.com/watch?v=GUNDehj9z9o] is a little demo of the 
> feature.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made AsyncWaitOperator chainable to non-sources.

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made 
AsyncWaitOperator chainable to non-sources.
URL: https://github.com/apache/flink/pull/11177#issuecomment-589720432
 
 
   
   ## CI report:
   
   * 61b28a39aa682892a746495931949ab20dd1979c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150047217) 
   * 3d36bd3136fd8b56ee367ee29dc9c92beaf4542a Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150094881) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5441)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11179: 
[FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and 
remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#issuecomment-589793675
 
 
   
   ## CI report:
   
   * 247bd519b8609610c80704bb4a7c0e5a8375494a Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150081947) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5440)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11179: 
[FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and 
remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#issuecomment-589793675
 
 
   
   ## CI report:
   
   * 247bd519b8609610c80704bb4a7c0e5a8375494a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150081947) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5440)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made AsyncWaitOperator chainable to non-sources.

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made 
AsyncWaitOperator chainable to non-sources.
URL: https://github.com/apache/flink/pull/11177#issuecomment-589720432
 
 
   
   ## CI report:
   
   * 61b28a39aa682892a746495931949ab20dd1979c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150047217) 
   * 3d36bd3136fd8b56ee367ee29dc9c92beaf4542a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150094881) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5441)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] AutoClosableProcess constructor now private

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] 
AutoClosableProcess constructor now private
URL: https://github.com/apache/flink/pull/11159#issuecomment-589070207
 
 
   
   ## CI report:
   
   * 273e92d7b571cccf3220e48e1bbf6da24d40828f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/149821994) 
   * f5c6a11bb9af6af83b39559240355ef9585ab29c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149851329) 
   * 20a56e7c55f419fc111c57d53e62d08a28c30427 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/150026288) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5427)
 
   * 6576a8786fa6f159b0ba05034ecb17e4c52444ad Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150075488) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5438)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests
URL: https://github.com/apache/flink/pull/11178#issuecomment-589751914
 
 
   
   ## CI report:
   
   * b2e002eb9b0f340f4efd6e79b22c19d9b588993f Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150066458) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5436)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made AsyncWaitOperator chainable to non-sources.

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made 
AsyncWaitOperator chainable to non-sources.
URL: https://github.com/apache/flink/pull/11177#issuecomment-589720432
 
 
   
   ## CI report:
   
   * 61b28a39aa682892a746495931949ab20dd1979c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150047217) 
   * 3d36bd3136fd8b56ee367ee29dc9c92beaf4542a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150094881) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7702: [FLINK-11088][Security][YARN] Allow YARN to discover pre-installed keytab files

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #7702: [FLINK-11088][Security][YARN] Allow 
YARN to discover pre-installed keytab files
URL: https://github.com/apache/flink/pull/7702#issuecomment-572195960
 
 
   
   ## CI report:
   
   * 72dd07f5f10a56adf6025e82083af21ada47c711 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143614040) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4198)
 
   * f2387288cb33f288164ed9d102b47868a93dc898 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148710964) Azure: 
[CANCELED](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5118)
 
   * 68496011f53df6a9933c2d8723d8f5995cc96111 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149728083) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5333)
 
   * 0974d563d5643009ce91f16eb3d10bb1c6704883 UNKNOWN
   * d17c4e19e6ca11fe376de3530651ebad460f351e Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150066416) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5434)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11178: [hotfix] Stabilize python tests
URL: https://github.com/apache/flink/pull/11178#issuecomment-589751914
 
 
   
   ## CI report:
   
   * b2e002eb9b0f340f4efd6e79b22c19d9b588993f Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150066458) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5436)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made AsyncWaitOperator chainable to non-sources.

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made 
AsyncWaitOperator chainable to non-sources.
URL: https://github.com/apache/flink/pull/11177#issuecomment-589720432
 
 
   
   ## CI report:
   
   * 61b28a39aa682892a746495931949ab20dd1979c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150047217) 
   * 3d36bd3136fd8b56ee367ee29dc9c92beaf4542a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] AutoClosableProcess constructor now private

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] 
AutoClosableProcess constructor now private
URL: https://github.com/apache/flink/pull/11159#issuecomment-589070207
 
 
   
   ## CI report:
   
   * 273e92d7b571cccf3220e48e1bbf6da24d40828f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/149821994) 
   * f5c6a11bb9af6af83b39559240355ef9585ab29c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149851329) 
   * 20a56e7c55f419fc111c57d53e62d08a28c30427 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/150026288) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5427)
 
   * 6576a8786fa6f159b0ba05034ecb17e4c52444ad Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150075488) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5438)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #7702: [FLINK-11088][Security][YARN] Allow YARN to discover pre-installed keytab files

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #7702: [FLINK-11088][Security][YARN] Allow 
YARN to discover pre-installed keytab files
URL: https://github.com/apache/flink/pull/7702#issuecomment-572195960
 
 
   
   ## CI report:
   
   * 72dd07f5f10a56adf6025e82083af21ada47c711 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/143614040) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=4198)
 
   * f2387288cb33f288164ed9d102b47868a93dc898 Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/148710964) Azure: 
[CANCELED](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5118)
 
   * 68496011f53df6a9933c2d8723d8f5995cc96111 Travis: 
[SUCCESS](https://travis-ci.com/flink-ci/flink/builds/149728083) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5333)
 
   * 0974d563d5643009ce91f16eb3d10bb1c6704883 UNKNOWN
   * d17c4e19e6ca11fe376de3530651ebad460f351e Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150066416) Azure: 
[SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5434)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Closed] (FLINK-16223) Flat aggregate misuses key group ranges

2020-02-21 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-16223.

Resolution: Not A Problem

> Flat aggregate misuses key group ranges
> ---
>
> Key: FLINK-16223
> URL: https://issues.apache.org/jira/browse/FLINK-16223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Seth Wiesman
>Priority: Major
>
> The implementation of flat aggregate appears to misuse key group ranges. When 
> we add a check in AbstractStreamOperator that the current key belongs to the 
> key group assigned to that subtask, tests in TableAggregateITCase begin to 
> fail. 
> This patch can be used to reproduce the issue[1]. 
> https://github.com/sjwiesman/flink/tree/keygrouprangecheck
> cc [~jincheng] [~hequn8128]
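
For readers following along, the check described above looks roughly like the following (a sketch; the linked branch is the authoritative patch): map the current key to its key group and assert that it falls into the subtask's assigned range.

{code:java}
import org.apache.flink.runtime.state.KeyGroupRange;
import org.apache.flink.runtime.state.KeyGroupRangeAssignment;

// Sketch of the sanity check described above (see the linked branch for the
// actual patch): the current key must map into this subtask's key groups.
public final class KeyGroupCheck {

    public static void checkKeyInRange(
            Object currentKey, int maxParallelism, KeyGroupRange range) {
        int keyGroup = KeyGroupRangeAssignment.assignToKeyGroup(currentKey, maxParallelism);
        if (!range.contains(keyGroup)) {
            throw new IllegalStateException(
                    "Key " + currentKey + " maps to key group " + keyGroup
                            + " outside of assigned range " + range);
        }
    }
}
{code}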



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-16223) Flat aggregate misuses key group ranges

2020-02-21 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-16223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17042150#comment-17042150
 ] 

Seth Wiesman commented on FLINK-16223:
--

I did a clean install of master and ran the tests 100 times in a loop without 
failure, so I'm going to assume that something went wrong with my local 
Maven cache. I will close this ticket.

> Flat aggregate misuses key group ranges
> ---
>
> Key: FLINK-16223
> URL: https://issues.apache.org/jira/browse/FLINK-16223
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Planner
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Seth Wiesman
>Priority: Major
>
> The implementation of flat aggregate appears to misuse key group ranges. When 
> we add a check in AbstractStreamOperator that the current key belongs to the 
> key group assigned to that subtask, tests in TableAggregateITCase begin to 
> fail. 
> This patch can be used to reproduce the issue[1]. 
> https://github.com/sjwiesman/flink/tree/keygrouprangecheck
> cc [~jincheng] [~hequn8128]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11179: 
[FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and 
remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#issuecomment-589793675
 
 
   
   ## CI report:
   
   * 247bd519b8609610c80704bb4a7c0e5a8375494a Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150081947) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made AsyncWaitOperator chainable to non-sources.

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11177: [FLINK-16219][runtime] Made 
AsyncWaitOperator chainable to non-sources.
URL: https://github.com/apache/flink/pull/11177#issuecomment-589720432
 
 
   
   ## CI report:
   
   * 61b28a39aa682892a746495931949ab20dd1979c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/150047217) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
flinkbot commented on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] 
Clean up checkpoint metadata code and remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#issuecomment-589793675
 
 
   
   ## CI report:
   
   * 247bd519b8609610c80704bb4a7c0e5a8375494a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] AutoClosableProcess constructor now private

2020-02-21 Thread GitBox
flinkbot edited a comment on issue #11159: [FLINK-16188][e2e] 
AutoClosableProcess constructor now private
URL: https://github.com/apache/flink/pull/11159#issuecomment-589070207
 
 
   
   ## CI report:
   
   * 273e92d7b571cccf3220e48e1bbf6da24d40828f Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/149821994) 
   * f5c6a11bb9af6af83b39559240355ef9585ab29c Travis: 
[FAILURE](https://travis-ci.com/flink-ci/flink/builds/149851329) 
   * 20a56e7c55f419fc111c57d53e62d08a28c30427 Travis: 
[CANCELED](https://travis-ci.com/flink-ci/flink/builds/150026288) Azure: 
[FAILURE](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5427)
 
   * 6576a8786fa6f159b0ba05034ecb17e4c52444ad Travis: 
[PENDING](https://travis-ci.com/flink-ci/flink/builds/150075488) 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] walterddr commented on a change in pull request #10995: [FLINK-15847][ml] Include flink-ml-api and flink-ml-lib in opt

2020-02-21 Thread GitBox
walterddr commented on a change in pull request #10995: [FLINK-15847][ml] 
Include flink-ml-api and flink-ml-lib in opt
URL: https://github.com/apache/flink/pull/10995#discussion_r382748084
 
 

 ##
 File path: flink-ml-parent/flink-ml-lib/pom.xml
 ##
 @@ -57,4 +57,30 @@ under the License.
 				<version>1.1.2</version>
 			</dependency>
 	</dependencies>
 
+	<build>
+		<plugins>
+			<plugin>
+				<groupId>org.apache.maven.plugins</groupId>
+				<artifactId>maven-shade-plugin</artifactId>
+				<executions>
+					<execution>
+						<id>shade-flink</id>
+						<phase>package</phase>
+						<goals>
+							<goal>shade</goal>
+						</goals>
+						<configuration>
+							<artifactSet>
+								<includes>
+									<include>com.github.fommil.netlib:core</include>
 
 Review comment:
   hmm, in fact the flink-ml-* packages are never released/bundled anyway. we 
should revert the LICENSE notice (in fact it shouldn't contain any LICENSE 
entry, since only source code was released: 
https://repo1.maven.org/maven2/org/apache/flink/flink-ml-lib_2.12/1.10.0/).
   
   How should we proceed? Should we have a revert in the release-1.10 branch only 
and keep this PR intact? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16226) Add back pressure to HttpFunction.

2020-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16226:
---
Labels: pull-request-available  (was: )

> Add back pressure to HttpFunction.
> --
>
> Key: FLINK-16226
> URL: https://issues.apache.org/jira/browse/FLINK-16226
> Project: Flink
>  Issue Type: Task
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Assignee: Igal Shilman
>Priority: Major
>  Labels: pull-request-available
>
> Recently a simple back pressure mechanism was introduced to stateful 
> functions.
> Now it can be used from the HttpFunction to keep the request backlog under 
> control.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink-statefun] igalshilman opened a new pull request #30: [FLINK-16226] Add Backpressure to HttpFunction

2020-02-21 Thread GitBox
igalshilman opened a new pull request #30: [FLINK-16226] Add Backpressure to 
HttpFunction
URL: https://github.com/apache/flink-statefun/pull/30
 
 
   ## This PR uses the back pressure mechanism introduced in #29 to support 
back pressure in `HttpFunction`.
   
   ## Background  
   When an HTTP remote function (`HttpFunction`) invokes a function remotely 
for a specific address, it waits for the response to come back 
asynchronously. While it waits, 
   new messages for that address might arrive; the `HttpFunction` 
starts logging these requests in persisted state, and when the original 
request completes, it batches the logged requests and sends them off 
as a single request.
   
   In order to keep that state under control, the `HttpFunction` needs to limit 
the maximum allowed batch size; this PR provides that.

   
   ## User supplied property
   The first part of this PR adds a property `maxBatchSize` to the function 
spec in `module.yaml`:
   
   ```
   - function:
       meta:
         kind: http
         type: org.foo/bar
       spec:
         ...
         maxBatch: 10
   ``` 
   
   This indicates that the maximum batch size sent, per address, 
for the function `org.foo/bar` is limited to 10 messages. Users can also omit 
the value, in which case a default of 1,000 is used.
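   
   To illustrate the mechanism (a hedged sketch with invented names, not this PR's actual classes), per-address batching with a cap behaves roughly like this:
   
   ```java
   import java.util.ArrayDeque;
   import java.util.ArrayList;
   import java.util.List;
   
   // Illustrative sketch, one instance per function address. Names are
   // invented; the real implementation lives in HttpFunction.
   final class PerAddressBatch<T> {
   
       private final int maxBatchSize;
       private final ArrayDeque<T> pending = new ArrayDeque<>();
       private boolean requestInFlight;
   
       PerAddressBatch(int maxBatchSize) {
           this.maxBatchSize = maxBatchSize;
       }
   
       /** Returns false when the batch is full, i.e. backpressure should kick in. */
       boolean enqueue(T message) {
           if (!requestInFlight) {
               requestInFlight = true;
               send(List.of(message));
               return true;
           }
           if (pending.size() >= maxBatchSize) {
               return false; // stop consuming input until onResponse()
           }
           pending.add(message);
           return true;
       }
   
       /** Called when the in-flight HTTP request completes. */
       void onResponse() {
           if (pending.isEmpty()) {
               requestInFlight = false;
               return;
           }
           List<T> batch = new ArrayList<>(pending);
           pending.clear();
           send(batch); // ship the accumulated batch as one request
       }
   
       private void send(List<T> batch) {
           // stub: issue the asynchronous HTTP invocation here
       }
   }
   ```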
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] flinkbot commented on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
flinkbot commented on issue #11179: [FLINK-16178][FLINK-16192][checkpointing] 
Clean up checkpoint metadata code and remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179#issuecomment-589783809
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 247bd519b8609610c80704bb4a7c0e5a8375494a (Fri Feb 21 
18:47:30 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [flink] StephanEwen opened a new pull request #11179: [FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and remove remaining bits of "legacy state"

2020-02-21 Thread GitBox
StephanEwen opened a new pull request #11179: 
[FLINK-16178][FLINK-16192][checkpointing] Clean up checkpoint metadata code and 
remove remaining bits of "legacy state"
URL: https://github.com/apache/flink/pull/11179
 
 
   ## NOTE: This removes savepoint compatibility with Flink 1.2 !
   
   For background, please consult 
[FLINK-16192](https://issues.apache.org/jira/browse/FLINK-16192)
   
   ## What is the purpose of the change
   
   **This is part of the preparation for implementing Fault Tolerance for 
Operator Coordinators for the new Source Interface (FLIP-27).**
   
   This PR contains various cleanups / simplifications / refactorings around 
the checkpoint metadata serialization / deserialization code, to make it easier 
to implement the handling of Operator Coordinator State.
   
   ...and to finally drop some really old and unused code. I felt a bit like an 
archaeologist digging through this ;-)
   
   
   ## Brief change log
   
   The main changes are:
   
 - 5436317bc99830bdc21b5799fcabf8f833120932 : *[FLINK-16178][refactor] Make 
the versioned Checkpoint Metadata Serializers only responsible for 
deserialization.*
   
   The metadata is always serialized with the latest serializer. 
Deserialization uses a version-specific serializer. That is the same 
functionality as before, but cleanly represented in the code. The code 
previously pretended to be version-dynamic on the serialization side as well, but 
could still only handle the latest version there; earlier versions 
failed with a `ClassCastException` or `UnsupportedMethodException`. (A schematic 
sketch of this scheme follows the change log below.)
   
   Weird constructs were necessary, like a `SavepointV1Serializer 
implements SavepointSerializer` (mismatch between handled version 
and generic type).
   
 - 247bd519b8609610c80704bb4a7c0e5a8375494a : *[FLINK-16192][checkpointing] 
Remove remaining bits of "legacy state" and remove Savepoint 1.2 compatibility*
   
   "Legacy State" Refers to the state originally created by the old 
`Checkpointed` interface, before state was re-scalable. This was later replaced 
by `CheckpointedFunction` (and `ListCheckpointed` as the shortcut).
   
   This state was no longer supported since Flink 1.4. However, all 1.2 
savepoints in migration tests (and some 1.3 savepoints for tests) used that 
"legacy state", that is why the code was retained and support for "legacy 
state" was hackily activated (via some static flags) for migration tests.
   
   Removing that code does not actually reduce any savepoint 
compatibility, because compatibility for 1.2 and 1.3 was anyway only given for 
jobs that did not use that "legacy state". However, removing this code makes 
all 1.2 savepoints from tests unusable.
   
   A [mailing list 
discussion](https://lists.apache.org/x/thread.html/rd36579fe754f7557da4ae1dda47820c84b21009c35e6e317671e83b0@%3Cdev.flink.apache.org%3E)
 suggested dropping that support rather than elaborately re-building the 
test data for Flink 1.2 with a different state type.
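   
   To make the first change above concrete, here is the schematic referenced there (invented names, not this PR's actual classes): serialization always writes the latest version, while deserialization dispatches on the version header to read-only, version-specific serializers.
   
   ```java
   import java.io.DataInputStream;
   import java.io.DataOutputStream;
   import java.io.IOException;
   import java.util.Map;
   
   // Schematic only; the PR's actual classes differ.
   interface MetadataReader {
       Object deserialize(DataInputStream in) throws IOException;
   }
   
   final class MetadataSerialization {
   
       static final int LATEST_VERSION = 2;
   
       // Version-specific serializers are only responsible for reading.
       static final Map<Integer, MetadataReader> READERS = Map.of(
               1, (MetadataReader) in -> { /* read the V1 layout */ return null; },
               2, (MetadataReader) in -> { /* read the V2 layout */ return null; });
   
       static Object read(DataInputStream in) throws IOException {
           int version = in.readInt();
           MetadataReader reader = READERS.get(version);
           if (reader == null) {
               throw new IOException("Unknown checkpoint metadata version " + version);
           }
           return reader.deserialize(in);
       }
   
       // Writing never dispatches: always the latest layout.
       static void write(Object metadata, DataOutputStream out) throws IOException {
           out.writeInt(LATEST_VERSION);
           // ... serialize `metadata` with the latest layout ...
       }
   }
   ```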
   
   ## Verifying this change
   
   This change is already covered by existing tests, such as the savepoint 
tests and savepoint migration tests.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): **no**
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: **no**
 - The serializers: **no**
 - The runtime per-record code paths (performance sensitive): **no**
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: **yes**
 - The S3 file system connector: **no**
   
   ## Documentation
   
 - Does this pull request introduce a new feature? **no**
 - If yes, how is the feature documented? **not applicable**
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (FLINK-16178) Prerequisite cleanups and refactorings in the Checkpoint Coordinator

2020-02-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16178:
---
Labels: pull-request-available  (was: )

> Prerequisite cleanups and refactorings in the Checkpoint Coordinator
> 
>
> Key: FLINK-16178
> URL: https://issues.apache.org/jira/browse/FLINK-16178
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Stephan Ewen
>Assignee: Stephan Ewen
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Umbrella issue for various small refactorings done as a prerequisite to the 
> implementation of coordinator checkpoints, as well as various small cleanups 
> of tech debt and inconsistencies in the checkpoint coordinator and related 
> components.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


  1   2   3   4   5   6   >