[GitHub] [flink] wtog commented on pull request #12179: [FLINK-16144] get client.timeout for the client, with a fallback to the akka.client…

2020-05-21 Thread GitBox


wtog commented on pull request #12179:
URL: https://github.com/apache/flink/pull/12179#issuecomment-632498106


   Hi @aljoscha and @kl0u, this PR is updated; please help review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] klion26 commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-21 Thread GitBox


klion26 commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r429050613



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -29,123 +29,99 @@ under the License.
 
 ## Event Time and Watermarks
 
-### Introduction
+### 概要
 
-Flink explicitly supports three different notions of time:
+Flink 明确支持以下三种时间语义:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink pipeline 中具体算子处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用 Event Time
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种时间语义:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和 Watermark 生成器,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用Watermarks]({% linktutorials/streaming_analytics.zh.md 
%}#使用Watermarks)中介绍,但是首先我们需要解释一下

Review comment:
   ```suggestion
   将在选节[使用Watermarks]({% link training/streaming_analytics.zh.md 
%}#使用Watermarks)中介绍,但是首先我们需要解释一下
   ```
   Running `sh docs/build.sh -p` locally reports an error here; the English original has a line break, so there is a space between `link` and the file path.
   Also, in a translation PR it is not recommended to directly change content you are unsure about, such as changing `training` to `tutorials` here, unless you are certain the change is needed. If you are not very sure, it is better to raise it for discussion before deciding, to avoid rework wasting your time~
   









[GitHub] [flink] klion26 commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-21 Thread GitBox


klion26 commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r429050101



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -3,7 +3,7 @@ title: Streaming Analytics
 nav-id: analytics
 nav-pos: 4
 nav-title: Streaming Analytics
-nav-parent_id: training
+nav-parent_id: tutorials

Review comment:
   What I mean is: the latest English docs have `training` here, so why change `training` back to `tutorials`? The PR you linked also changes `tutorials` to `training`; your change effectively reverts it again.

##
File path: docs/training/streaming_analytics.zh.md
##
@@ -29,123 +29,99 @@ under the License.
 
 ## Event Time and Watermarks
 
-### Introduction
+### 概要
 
-Flink explicitly supports three different notions of time:
+Flink 明确支持以下三种时间语义:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink pipeline 中具体算子处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用 Event Time
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种时间语义:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和 Watermark 生成器,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用Watermarks]({% linktutorials/streaming_analytics.zh.md 
%}#使用Watermarks)中介绍,但是首先我们需要解释一下

Review comment:
   ```suggestion
   将在选节[使用Watermarks]({% link training/streaming_analytics.zh.md 
%}#使用Watermarks)中介绍,但是首先我们需要解释一下
   ```
   Running `sh docs/build.sh -p` locally reports an error here; the English original has a line break, so there is a space between `link` and the file path.
   Also, it is not recommended to make such changes directly in a translation, such as changing `training` to `tutorials` here, unless you are certain the change is needed. If you are not very sure, it is better to raise it for discussion before deciding, to avoid rework wasting your time~
   

##
File path: docs/training/streaming_analytics.zh.md
##
@@ -29,123 +29,99 @@ under the License.
 
 ## Event Time and Watermarks
 
-### Introduction
+### 概要
 
-Flink explicitly supports three different notions of time:
+Flink 明确支持以下三种时间语义:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳

Review comment:
   Would “读取事件” be a bit better here?









[jira] [Commented] (FLINK-17690) Python function wrapper omits docstr

2020-05-21 Thread UnityLung (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113768#comment-17113768
 ] 

UnityLung commented on FLINK-17690:
---

Hi,

I would like to fix this.

Could you please assign it to me?

Thanks a lot.

> Python function wrapper omits docstr
> 
>
> Key: FLINK-17690
> URL: https://issues.apache.org/jira/browse/FLINK-17690
> Project: Flink
>  Issue Type: Improvement
>  Components: Stateful Functions
>Reporter: Igal Shilman
>Priority: Minor
>
> The Statefun Python SDK has a convenience bind method that wraps a function.
> The 
> [wrapper|https://github.com/apache/flink-statefun/blob/master/statefun-python-sdk/statefun/core.py#L182]
>  omits the docstring of the wrapped function. A common practice would be to 
> use [https://docs.python.org/3/library/functools.html#functools.wraps]
>  
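As the issue points out, `functools.wraps` is the common way to preserve a wrapped function's metadata. A minimal sketch (the `bind` decorator below is a hypothetical stand-in for the SDK's actual method, not its real implementation):

```python
import functools

def bind(fn):
    # Hypothetical stand-in for the SDK's bind wrapper; the fix is the
    # functools.wraps line, which copies __doc__, __name__, etc. onto
    # the wrapper so the original docstring is not lost.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@bind
def greet(name):
    """Return a greeting for the given name."""
    return f"hello {name}"

print(greet.__doc__)   # the original docstring survives the wrapping
print(greet.__name__)  # greet
```

Without the `functools.wraps` line, `greet.__doc__` would be `None` and `greet.__name__` would be `wrapper`.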



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17860) Recursively remove channel state directories

2020-05-21 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski closed FLINK-17860.
--
Resolution: Duplicate

It looks like this is a duplicate of FLINK-13856, which contains a valuable 
discussion, so I'm closing this one.

> Recursively remove channel state directories
> 
>
> Key: FLINK-17860
> URL: https://issues.apache.org/jira/browse/FLINK-17860
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Roman Khachatryan
>Priority: Critical
> Fix For: 1.11.0
>
>
> With a high degree of parallelism, we end up with n*n files in each 
> checkpoint. Writing them is fast (from many subtasks), but removing them is 
> slow (from the JM).
> This can't be mitigated by state.backend.fs.memory-threshold because most 
> states are tens to hundreds of MB.
>  
> Instead of going through them 1 by 1, we could remove the directory 
> recursively.
>  
> The easiest way is to remove channelStateHandle.discard() calls and use 
> isRecursive=true  in 
> FsCompletedCheckpointStorageLocation.disposeStorageLocation.
> Note: with the current isRecursive=false there will be an exception if there 
> are any files left under that folder.
>  
> This can be extended to other state handles in future as well.
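The recursive removal idea can be sketched with plain `java.nio` (a hypothetical illustration; the actual change would go through Flink's FileSystem API, e.g. `FsCompletedCheckpointStorageLocation.disposeStorageLocation` with `isRecursive=true`):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class RecursiveRemove {

    // Delete a checkpoint directory in one recursive pass, children first,
    // instead of issuing one delete call per channel state file.
    static void deleteRecursively(Path dir) {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                try {
                    Files.delete(p);
                } catch (IOException e) {
                    throw new UncheckedIOException(e);
                }
            });
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("chk-1");
        Files.createFile(dir.resolve("channel-state-0"));
        Files.createFile(dir.resolve("channel-state-1"));
        deleteRecursively(dir);
        System.out.println(Files.exists(dir)); // false: directory and contents are gone
    }
}
```

This also illustrates the note above: a non-recursive delete of the directory would fail while any channel state files remain inside it.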





[GitHub] [flink] flinkbot commented on pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-21 Thread GitBox


flinkbot commented on pull request #12289:
URL: https://github.com/apache/flink/pull/12289#issuecomment-632492310


   
   ## CI report:
   
   * cda1c81b986bf87f0fd95abfdc60638813a43c0b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-632089755


   
   ## CI report:
   
   * 09833d3d171cfdd17286f53d7917df6bbfe8a8c8 UNKNOWN
   * 81a8f34c3108c7efc0ed350a50b59c2829a99eff Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2030)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-21 Thread GitBox


flinkbot commented on pull request #12289:
URL: https://github.com/apache/flink/pull/12289#issuecomment-632489396


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit cda1c81b986bf87f0fd95abfdc60638813a43c0b (Fri May 22 
05:25:45 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17874).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-17874) Writing to hbase throws NPE

2020-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17874:
---
Labels: pull-request-available  (was: )

> Writing to hbase throws NPE
> ---
>
> Key: FLINK-17874
> URL: https://issues.apache.org/jira/browse/FLINK-17874
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.10.0
>Reporter: chaiyongqiang
>Priority: Major
>  Labels: pull-request-available
> Attachments: NPE.png
>
>
> Writing a table to HBase throws an NPE when a field is NULL; we need to handle it.
> Please refer to NPE.png for the detailed stack. !NPE.png!





[GitHub] [flink] cyq89051127 opened a new pull request #12289: [FLINK-17874][Connectors/HBase]Handling the NPE for hbase-connector

2020-05-21 Thread GitBox


cyq89051127 opened a new pull request #12289:
URL: https://github.com/apache/flink/pull/12289


   
   ## What is the purpose of the change
   Handling the NPE for HBase connector.
   
   ## Brief change log
   
   *Adding a NULL check before actually serializing the value*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup, tested on my local machine.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (*no*)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (*no*)
 - The serializers: (*yes*)
 - The runtime per-record code paths (performance sensitive): (*no*)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (*no*)
 - The S3 file system connector: (*no*)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (*no*)
 - If yes, how is the feature documented? (not documented)
   







[GitHub] [flink] flinkbot edited a comment on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-632089755


   
   ## CI report:
   
   * 09833d3d171cfdd17286f53d7917df6bbfe8a8c8 UNKNOWN
   * 3cc09ea7ebbfc658371206c1369aba9f061cca49 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2028)
 
   * 81a8f34c3108c7efc0ed350a50b59c2829a99eff UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12287: [FLINK-16077][docs-zh] Translate Custom State Serialization page into Chinese

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12287:
URL: https://github.com/apache/flink/pull/12287#issuecomment-632456334


   
   ## CI report:
   
   * 02822f55955ef2c684e724fb254ba641f07d5871 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2025)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Created] (FLINK-17875) Support state TTL for remote functions

2020-05-21 Thread Tzu-Li (Gordon) Tai (Jira)
Tzu-Li (Gordon) Tai created FLINK-17875:
---

 Summary: Support state TTL for remote functions
 Key: FLINK-17875
 URL: https://issues.apache.org/jira/browse/FLINK-17875
 Project: Flink
  Issue Type: Task
  Components: Stateful Functions
Reporter: Tzu-Li (Gordon) Tai
Assignee: Tzu-Li (Gordon) Tai
 Fix For: statefun-2.1.0


With FLINK-17644, we now have state TTL support for embedded functions.
This should be extended to remote functions, by allowing the module specs to 
define the TTL for declared remote function state.

With this, it is also likely that we need to uptick the version for the YAML 
module spec.





[jira] [Commented] (FLINK-17874) Writing to hbase throws NPE

2020-05-21 Thread chaiyongqiang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113747#comment-17113747
 ] 

chaiyongqiang commented on FLINK-17874:
---

When writing a field to HBase, we serialize it to bytes. For now, 
*_HBaseTypeUtils.serializeFromObject_* only checks for NULL values for the 
STRING and BYTE types; for other types, an NPE is thrown.

I think we need a NULL check at the beginning of 
*_HBaseTypeUtils.serializeFromObject_*.
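The proposed fix can be sketched as follows (a hypothetical, simplified stand-in for `HBaseTypeUtils.serializeFromObject`; the real method dispatches over many more types):

```java
import java.nio.charset.StandardCharsets;

public class NullSafeSerializer {

    // Simplified sketch: the NULL check comes first, before any type-specific
    // serialization, so no type can trigger an NPE.
    static byte[] serializeFromObject(Object value) {
        if (value == null) {
            return new byte[0]; // HBase code would use HConstants.EMPTY_BYTE_ARRAY
        }
        if (value instanceof String) {
            return ((String) value).getBytes(StandardCharsets.UTF_8);
        }
        if (value instanceof Integer) {
            int v = (Integer) value;
            return new byte[] {(byte) (v >>> 24), (byte) (v >>> 16), (byte) (v >>> 8), (byte) v};
        }
        throw new IllegalArgumentException("Unsupported type: " + value.getClass());
    }

    public static void main(String[] args) {
        System.out.println(serializeFromObject(null).length); // 0, no NPE
        System.out.println(serializeFromObject("ok").length); // 2
    }
}
```

With the check at the top, an `Integer` (or any other) field that happens to be NULL no longer reaches the type-specific branch that would dereference it.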

> Writing to hbase throws NPE
> ---
>
> Key: FLINK-17874
> URL: https://issues.apache.org/jira/browse/FLINK-17874
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.10.0
>Reporter: chaiyongqiang
>Priority: Major
> Attachments: NPE.png
>
>
> Writing a table to HBase throws an NPE when a field is NULL; we need to handle it.
> Please refer to NPE.png for the detailed stack. !NPE.png!





[jira] [Created] (FLINK-17874) Writing to hbase throws NPE

2020-05-21 Thread chaiyongqiang (Jira)
chaiyongqiang created FLINK-17874:
-

 Summary: Writing to hbase throws NPE
 Key: FLINK-17874
 URL: https://issues.apache.org/jira/browse/FLINK-17874
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.10.0
Reporter: chaiyongqiang
 Attachments: NPE.png

Writing a table to HBase throws an NPE when a field is NULL; we need to handle it.

Please refer to NPE.png for the detailed stack. !NPE.png!





[GitHub] [flink] flinkbot edited a comment on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12269:
URL: https://github.com/apache/flink/pull/12269#issuecomment-631541996


   
   ## CI report:
   
   * 24c44fd00652a6b5859075b3afea1e4e9ca98445 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1960)
 
   * f415b350558bea9b17e12638e70efc701f06c14d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2029)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-632089755


   
   ## CI report:
   
   * 09833d3d171cfdd17286f53d7917df6bbfe8a8c8 UNKNOWN
   * 3cc09ea7ebbfc658371206c1369aba9f061cca49 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2028)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12287: [FLINK-16077][docs-zh] Translate Custom State Serialization page into Chinese

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12287:
URL: https://github.com/apache/flink/pull/12287#issuecomment-632456334


   
   ## CI report:
   
   * 02822f55955ef2c684e724fb254ba641f07d5871 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2025)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12269:
URL: https://github.com/apache/flink/pull/12269#issuecomment-631541996


   
   ## CI report:
   
   * 24c44fd00652a6b5859075b3afea1e4e9ca98445 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1960)
 
   * f415b350558bea9b17e12638e70efc701f06c14d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-632089755


   
   ## CI report:
   
   * 67bc5db95d319972e8a27f9de71b5edd9c457287 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2006)
 
   * 09833d3d171cfdd17286f53d7917df6bbfe8a8c8 UNKNOWN
   * 3cc09ea7ebbfc658371206c1369aba9f061cca49 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-626611802


   
   ## CI report:
   
   * e3ffb15cc38bdbbf1f5a11014782d081edaecea6 UNKNOWN
   * d9fd98ae5664303184273f83e4afb0623e6406f0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2020)
 
   * 1142276931ccfd3ab42da2f44369e649c911aafc Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2027)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Assigned] (FLINK-17873) Add check for max concurrent checkpoints under UC mode

2020-05-21 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang reassigned FLINK-17873:


Assignee: Yuan Mei

> Add check for max concurrent checkpoints under UC mode
> --
>
> Key: FLINK-17873
> URL: https://issues.apache.org/jira/browse/FLINK-17873
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Checkpointing
>Reporter: Yuan Mei
>Assignee: Yuan Mei
>Priority: Major
> Fix For: 1.11.0
>
>
> Currently, the UC mode only supports max concurrent checkpoint number = 1.
> So we need to check whether the configured max allowed checkpoints are more 
> than 1 under the UC mode.





[jira] [Created] (FLINK-17873) Add check for max concurrent checkpoints under UC mode

2020-05-21 Thread Yuan Mei (Jira)
Yuan Mei created FLINK-17873:


 Summary: Add check for max concurrent checkpoints under UC mode
 Key: FLINK-17873
 URL: https://issues.apache.org/jira/browse/FLINK-17873
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Checkpointing
Reporter: Yuan Mei
 Fix For: 1.11.0


Currently, the UC mode only supports max concurrent checkpoint number = 1.

So we need to check whether the configured max allowed checkpoints are more 
than 1 under the UC mode.
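The proposed check can be sketched as follows (hypothetical names; the real validation would live in Flink's checkpoint configuration code):

```java
public class UnalignedCheckpointValidation {

    // Unaligned checkpoints (UC) currently only support a single concurrent
    // checkpoint, so reject any configuration with UC enabled and
    // maxConcurrentCheckpoints > 1.
    static void validate(boolean unalignedEnabled, int maxConcurrentCheckpoints) {
        if (unalignedEnabled && maxConcurrentCheckpoints > 1) {
            throw new IllegalArgumentException(
                "Unaligned checkpoints support only one concurrent checkpoint, but "
                    + maxConcurrentCheckpoints + " were configured.");
        }
    }

    public static void main(String[] args) {
        validate(false, 5); // aligned mode: any value is fine
        validate(true, 1);  // UC mode with one concurrent checkpoint: fine
        try {
            validate(true, 2);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected"); // UC mode with 2: rejected
        }
    }
}
```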





[GitHub] [flink] flinkbot edited a comment on pull request #12282: [FLINK-17865][checkpoint] Increase default size of 'state.backend.fs.memory-threshold'

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12282:
URL: https://github.com/apache/flink/pull/12282#issuecomment-632089755


   
   ## CI report:
   
   * 67bc5db95d319972e8a27f9de71b5edd9c457287 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2006)
 
   * 09833d3d171cfdd17286f53d7917df6bbfe8a8c8 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12288: [FLINK-17870]. dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12288:
URL: https://github.com/apache/flink/pull/12288#issuecomment-632456377


   
   ## CI report:
   
   * 08bc3bf4a17f2e6a99ce5ff3af91ae1341254391 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2026)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17468) Provide more detailed metrics why asynchronous part of checkpoint is taking long time

2020-05-21 Thread Congxian Qiu(klion26) (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113726#comment-17113726
 ] 

Congxian Qiu(klion26) commented on FLINK-17468:
---

[~qqibrow] How's it going? Do you need any help?

> Provide more detailed metrics why asynchronous part of checkpoint is taking 
> long time
> -
>
> Key: FLINK-17468
> URL: https://issues.apache.org/jira/browse/FLINK-17468
> Project: Flink
>  Issue Type: New Feature
>  Components: Runtime / Checkpointing, Runtime / Metrics, Runtime / 
> State Backends
>Affects Versions: 1.10.0
>Reporter: Piotr Nowojski
>Priority: Major
>
> As [reported by 
> users|https://lists.apache.org/thread.html/r0833452796ca7d1c9d5e35c110089c95cfdadee9d81884a13613a4ce%40%3Cuser.flink.apache.org%3E]
>  it's not obvious why asynchronous part of checkpoint is taking so long time.
> Maybe we could provide some more detailed metrics/UI/logs about uploading 
> files, materializing meta data, or other things that are happening during the 
> asynchronous checkpoint process?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17872) Update StreamingFileSink documents to add avro formats

2020-05-21 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17872:

Priority: Trivial  (was: Major)

> Update StreamingFileSink documents to add avro formats
> --
>
> Key: FLINK-17872
> URL: https://issues.apache.org/jira/browse/FLINK-17872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Documentation
>Reporter: Yun Gao
>Priority: Trivial
> Fix For: 1.11.0
>
>
> We added Avro-format for StreamingFileSink in [FLINK-11395 | 
> https://issues.apache.org/jira/browse/FLINK-11395], but did not update the 
> document to reflect that.





[GitHub] [flink] dianfu commented on pull request #12280: [FLINK-17866][python] Change the implementation of the LocalFileSystem#pathToFile to fix the test case failure of PyFlink when running on Wind

2020-05-21 Thread GitBox


dianfu commented on pull request #12280:
URL: https://github.com/apache/flink/pull/12280#issuecomment-632460864


   I'm not familiar with this part, so I'm not sure:
   1) whether this change will introduce potential problems
   2) whether the same kind of problem also exists in other code paths
   
   @aljoscha could you help review this, as I guess you're more familiar with 
it? Thanks a lot!







[jira] [Updated] (FLINK-17872) Update StreamingFileSink documents to add avro formats

2020-05-21 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao updated FLINK-17872:

Component/s: Documentation
 Connectors / FileSystem

> Update StreamingFileSink documents to add avro formats
> --
>
> Key: FLINK-17872
> URL: https://issues.apache.org/jira/browse/FLINK-17872
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem, Documentation
>Reporter: Yun Gao
>Priority: Major
> Fix For: 1.11.0
>
>
> We added Avro-format for StreamingFileSink in [FLINK-11395 | 
> https://issues.apache.org/jira/browse/FLINK-11395], but did not update the 
> document to reflect that.





[jira] [Created] (FLINK-17872) Update StreamingFileSink documents to add avro formats

2020-05-21 Thread Yun Gao (Jira)
Yun Gao created FLINK-17872:
---

 Summary: Update StreamingFileSink documents to add avro formats
 Key: FLINK-17872
 URL: https://issues.apache.org/jira/browse/FLINK-17872
 Project: Flink
  Issue Type: Improvement
Reporter: Yun Gao
 Fix For: 1.11.0


We added Avro-format for StreamingFileSink in [FLINK-11395 | 
https://issues.apache.org/jira/browse/FLINK-11395], but did not update the 
document to reflect that.





[GitHub] [flink] curcur edited a comment on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur edited a comment on pull request #12269:
URL: https://github.com/apache/flink/pull/12269#issuecomment-632459439











[GitHub] [flink] curcur commented on pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur commented on pull request #12269:
URL: https://github.com/apache/flink/pull/12269#issuecomment-632459439


   address comments.







[GitHub] [flink] curcur commented on a change in pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur commented on a change in pull request #12269:
URL: https://github.com/apache/flink/pull/12269#discussion_r429024037



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java
##
@@ -262,6 +251,40 @@ public void failJobDueToTaskFailure(Throwable cause, 
ExecutionAttemptID failingT
}
}
 
+   @Test
+   public void testExpiredCheckpointExceedsTolerableFailureNumber() {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   ExecutionVertex vertex1 = mockExecutionVertex(new 
ExecutionAttemptID());
+   ExecutionVertex vertex2 = mockExecutionVertex(new 
ExecutionAttemptID());
+
+   final String errorMsg = "Exceeded checkpoint failure tolerance 
number!";
+   CheckpointFailureManager checkpointFailureManager = 
getCheckpointFailureManager(errorMsg);
+   CheckpointCoordinator coord = getCheckpointCoordinator(new 
JobID(), vertex1, vertex2, checkpointFailureManager);
+
+   try {
+   // trigger the checkpoint. this should succeed
+   final CompletableFuture 
checkPointFuture = coord.triggerCheckpoint(false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   
assertFalse(checkPointFuture.isCompletedExceptionally());

Review comment:
   Hmm, I do not think it is used to test triggering. I guess it is to make 
sure the exception comes from `coord.abortPendingCheckpoints`, not from other 
places like triggering.
   
   Maybe it is overkill?
   
   









[GitHub] [flink] flinkbot commented on pull request #12288: [FLINK-17870]. dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread GitBox


flinkbot commented on pull request #12288:
URL: https://github.com/apache/flink/pull/12288#issuecomment-632456377


   
   ## CI report:
   
   * 08bc3bf4a17f2e6a99ce5ff3af91ae1341254391 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot commented on pull request #12287: [FLINK-16077][docs-zh] Translate Custom State Serialization page into Chinese

2020-05-21 Thread GitBox


flinkbot commented on pull request #12287:
URL: https://github.com/apache/flink/pull/12287#issuecomment-632456334


   
   ## CI report:
   
   * 02822f55955ef2c684e724fb254ba641f07d5871 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-626611802


   
   ## CI report:
   
   * e3ffb15cc38bdbbf1f5a11014782d081edaecea6 UNKNOWN
   * d9fd98ae5664303184273f83e4afb0623e6406f0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2020)
 
   * 1142276931ccfd3ab42da2f44369e649c911aafc UNKNOWN
   
   
   







[GitHub] [flink] curcur commented on a change in pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur commented on a change in pull request #12269:
URL: https://github.com/apache/flink/pull/12269#discussion_r429018621



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java
##
@@ -262,6 +251,40 @@ public void failJobDueToTaskFailure(Throwable cause, 
ExecutionAttemptID failingT
}
}
 
+   @Test
+   public void testExpiredCheckpointExceedsTolerableFailureNumber() {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   ExecutionVertex vertex1 = mockExecutionVertex(new 
ExecutionAttemptID());
+   ExecutionVertex vertex2 = mockExecutionVertex(new 
ExecutionAttemptID());
+
+   final String errorMsg = "Exceeded checkpoint failure tolerance 
number!";
+   CheckpointFailureManager checkpointFailureManager = 
getCheckpointFailureManager(errorMsg);
+   CheckpointCoordinator coord = getCheckpointCoordinator(new 
JobID(), vertex1, vertex2, checkpointFailureManager);
+
+   try {
+   // trigger the checkpoint. this should succeed
+   final CompletableFuture 
checkPointFuture = coord.triggerCheckpoint(false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   
assertFalse(checkPointFuture.isCompletedExceptionally());
+
+   coord.abortPendingCheckpoints(new 
CheckpointException(CHECKPOINT_EXPIRED));
+
+   fail("Test failed.");
+   }
+   catch (Exception e) {
+   //expected
+   assertTrue(e instanceof RuntimeException);
+   assertEquals(errorMsg, e.getMessage());
+   } finally {
+   try {
+   coord.shutdown(JobStatus.FINISHED);
+   } catch (Exception e) {
+   e.printStackTrace();
+   fail(e.getMessage());

Review comment:
   Ha, I think you mean to throw the exception directly from the test 
function.
   
   Yep, that should also work. It is indeed a bit verbose here.
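[Editor's note] For readers skimming the thread, the two test shapes being compared can be sketched as follows. This is a plain-Java illustration with stand-ins for the JUnit helpers; `abortPendingCheckpoints` here is a hypothetical placeholder, not Flink's actual coordinator:

```java
// Illustrative sketch of the two expected-exception styles discussed above.
// fail()/assertEquals()/expectThrows() stand in for the JUnit helpers;
// nothing here is Flink code.
public class ExpectedExceptionStyles {

    static void fail(String msg) { throw new AssertionError(msg); }

    static void assertEquals(Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }

    // Stand-in for the operation under test, assumed to throw.
    static void abortPendingCheckpoints() {
        throw new RuntimeException("Exceeded checkpoint failure tolerance number!");
    }

    // Style 1: try-catch-fail, as in the test under review. Verbose, but the
    // assertion on the message sits right next to the call.
    static void tryCatchFailStyle() {
        try {
            abortPendingCheckpoints();
            fail("Expected a RuntimeException");  // AssertionError escapes the catch below
        } catch (RuntimeException e) {
            assertEquals("Exceeded checkpoint failure tolerance number!", e.getMessage());
        }
    }

    // Style 2: an assertThrows-like helper keeps the test body flat and lets
    // unexpected exceptions simply propagate out of the test method.
    static RuntimeException expectThrows(Runnable action) {
        try {
            action.run();
        } catch (RuntimeException e) {
            return e;
        }
        throw new AssertionError("Expected a RuntimeException");
    }

    public static void main(String[] args) {
        tryCatchFailStyle();
        RuntimeException e = expectThrows(ExpectedExceptionStyles::abortPendingCheckpoints);
        assertEquals("Exceeded checkpoint failure tolerance number!", e.getMessage());
        System.out.println("both styles passed");
    }
}
```

Either style verifies the same behavior; the helper in style 2 is essentially what later JUnit versions ship as `assertThrows`.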









[GitHub] [flink] Myasuka commented on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


Myasuka commented on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-632455461


   @SteNicholas Thanks for your update. Let's wait for a green build.







[jira] [Resolved] (FLINK-17823) Resolve the race condition while releasing RemoteInputChannel

2020-05-21 Thread Zhijiang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijiang resolved FLINK-17823.
--
Resolution: Fixed

Merged in release-1.11: 3eb1075ded64da20e6f7a5bc268f455eaf6573eb

Will merge to master later and update the info.

> Resolve the race condition while releasing RemoteInputChannel
> -
>
> Key: FLINK-17823
> URL: https://issues.apache.org/jira/browse/FLINK-17823
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Zhijiang
>Assignee: Zhijiang
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> RemoteInputChannel#releaseAllResources might be called by canceler thread. 
> Meanwhile, the task thread can also call RemoteInputChannel#getNextBuffer. 
> There probably cause two potential problems:
>  * Task thread might get null buffer after canceler thread already released 
> all the buffers, then it might cause misleading NPE in getNextBuffer.
>  * Task thread and canceler thread might pull the same buffer concurrently, 
> which causes unexpected exception when the same buffer is recycled twice.
> The solution is to properly synchronize the buffer queue in release method to 
> avoid the same buffer pulled by both canceler thread and task thread. And in 
> getNextBuffer method, we add some explicit checks to avoid misleading NPE and 
> hint some valid exceptions.
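[Editor's note] The synchronization described above can be sketched roughly as follows. This is a hypothetical, heavily simplified stand-in, not Flink's actual `RemoteInputChannel`:

```java
import java.util.ArrayDeque;

// Hypothetical sketch of the fix described above: the canceler thread
// (releaseAllResources) and the task thread (getNextBuffer) guard the buffer
// queue with the same lock, so a buffer can never be handed out and recycled
// twice, and a released channel fails with a clear exception instead of a
// misleading NullPointerException. Names and types are illustrative only.
public class ChannelSketch {

    private final ArrayDeque<String> buffers = new ArrayDeque<>();
    private boolean released;

    public void add(String buffer) {
        synchronized (buffers) {
            if (!released) {
                buffers.add(buffer);
            }
        }
    }

    // Canceler thread: mark released and drain exactly once, under the lock.
    public void releaseAllResources() {
        synchronized (buffers) {
            released = true;
            buffers.clear();
        }
    }

    // Task thread: the explicit check turns the race into a well-defined error.
    public String getNextBuffer() {
        synchronized (buffers) {
            if (released) {
                throw new IllegalStateException("Channel already released");
            }
            return buffers.poll();
        }
    }

    public static void main(String[] args) {
        ChannelSketch c = new ChannelSketch();
        c.add("b0");
        System.out.println("got: " + c.getNextBuffer());
        c.releaseAllResources();
        try {
            c.getNextBuffer();
        } catch (IllegalStateException e) {
            System.out.println("after release: " + e.getMessage());
        }
        // prints "got: b0" then "after release: Channel already released"
    }
}
```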





[jira] [Commented] (FLINK-13553) KvStateServerHandlerTest.readInboundBlocking unstable on Travis

2020-05-21 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113713#comment-17113713
 ] 

Dian Fu commented on FLINK-13553:
-

Another instance: 
https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/1990/logs/105

> KvStateServerHandlerTest.readInboundBlocking unstable on Travis
> ---
>
> Key: FLINK-13553
> URL: https://issues.apache.org/jira/browse/FLINK-13553
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Till Rohrmann
>Assignee: Gary Yao
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{KvStateServerHandlerTest.readInboundBlocking}} and 
> {{KvStateServerHandlerTest.testQueryExecutorShutDown}} fail on Travis with a 
> {{TimeoutException}}.
> https://api.travis-ci.org/v3/job/566420641/log.txt





[GitHub] [flink] flinkbot commented on pull request #12288: [FLINK-17870]. dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread GitBox


flinkbot commented on pull request #12288:
URL: https://github.com/apache/flink/pull/12288#issuecomment-632453444


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 08bc3bf4a17f2e6a99ce5ff3af91ae1341254391 (Fri May 22 
03:08:07 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-17870).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Commented] (FLINK-17730) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart times out

2020-05-21 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113710#comment-17113710
 ] 

Dian Fu commented on FLINK-17730:
-

It seems that it's still happening:
https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_apis/build/builds/2005/logs/96

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  times out
> 
>
> Key: FLINK-17730
> URL: https://issues.apache.org/jira/browse/FLINK-17730
> Project: Flink
>  Issue Type: Bug
>  Components: Build System / Azure Pipelines, FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Robert Metzger
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0, 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1374=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8
> After 5 minutes 
> {code}
> 2020-05-15T06:56:38.1688341Z "main" #1 prio=5 os_prio=0 
> tid=0x7fa10800b800 nid=0x1161 runnable [0x7fa110959000]
> 2020-05-15T06:56:38.1688709Zjava.lang.Thread.State: RUNNABLE
> 2020-05-15T06:56:38.1689028Z  at 
> java.net.SocketInputStream.socketRead0(Native Method)
> 2020-05-15T06:56:38.1689496Z  at 
> java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
> 2020-05-15T06:56:38.1689921Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:171)
> 2020-05-15T06:56:38.1690316Z  at 
> java.net.SocketInputStream.read(SocketInputStream.java:141)
> 2020-05-15T06:56:38.1690723Z  at 
> sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
> 2020-05-15T06:56:38.1691196Z  at 
> sun.security.ssl.InputRecord.readV3Record(InputRecord.java:593)
> 2020-05-15T06:56:38.1691608Z  at 
> sun.security.ssl.InputRecord.read(InputRecord.java:532)
> 2020-05-15T06:56:38.1692023Z  at 
> sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:975)
> 2020-05-15T06:56:38.1692558Z  - locked <0xb94644f8> (a 
> java.lang.Object)
> 2020-05-15T06:56:38.1692946Z  at 
> sun.security.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:933)
> 2020-05-15T06:56:38.1693371Z  at 
> sun.security.ssl.AppInputStream.read(AppInputStream.java:105)
> 2020-05-15T06:56:38.1694151Z  - locked <0xb9464d20> (a 
> sun.security.ssl.AppInputStream)
> 2020-05-15T06:56:38.1694908Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
> 2020-05-15T06:56:38.1695475Z  at 
> org.apache.http.impl.io.SessionInputBufferImpl.read(SessionInputBufferImpl.java:198)
> 2020-05-15T06:56:38.1696007Z  at 
> org.apache.http.impl.io.ContentLengthInputStream.read(ContentLengthInputStream.java:176)
> 2020-05-15T06:56:38.1696509Z  at 
> org.apache.http.conn.EofSensorInputStream.read(EofSensorInputStream.java:135)
> 2020-05-15T06:56:38.1696993Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1697466Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1698069Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1698567Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699041Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1699624Z  at 
> com.amazonaws.event.ProgressInputStream.read(ProgressInputStream.java:180)
> 2020-05-15T06:56:38.1700090Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1700584Z  at 
> com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:107)
> 2020-05-15T06:56:38.1701282Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1701800Z  at 
> com.amazonaws.services.s3.internal.S3AbortableInputStream.read(S3AbortableInputStream.java:125)
> 2020-05-15T06:56:38.1702328Z  at 
> com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:90)
> 2020-05-15T06:56:38.1702804Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream.lambda$read$3(S3AInputStream.java:445)
> 2020-05-15T06:56:38.1703270Z  at 
> org.apache.hadoop.fs.s3a.S3AInputStream$$Lambda$42/1204178174.execute(Unknown 
> Source)
> 2020-05-15T06:56:38.1703677Z  at 
> org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:109)
> 2020-05-15T06:56:38.1704090Z  at 
> org.apache.hadoop.fs.s3a.Invoker.lambda$retry$3(Invoker.java:260)
> 2020-05-15T06:56:38.1704607Z  at 
> org.apache.hadoop.fs.s3a.Invoker$$Lambda$23/1991724700.execute(Unknown Source)
> 2020-05-15T06:56:38.1705115Z  at 
> 

[jira] [Updated] (FLINK-17870) dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-17870:
---
Labels: pull-request-available  (was: )

> dependent jars are missing to be shipped to cluster in scala shell
> --
>
> Key: FLINK-17870
> URL: https://issues.apache.org/jira/browse/FLINK-17870
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.11.0
>Reporter: Jeff Zhang
>Priority: Major
>  Labels: pull-request-available
>






[GitHub] [flink] SteNicholas commented on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


SteNicholas commented on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-632452729


   @Myasuka I have already aligned the test between `emptyValue` and 
`ctx().getOriginal()`, and all `TtlStateTestBase` test cases passed. 
Thanks again for the review.







[GitHub] [flink] zjffdu opened a new pull request #12288: [FLINK-17870]. dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread GitBox


zjffdu opened a new pull request #12288:
URL: https://github.com/apache/flink/pull/12288


   ## What is the purpose of the change
   
   `executeAsync` doesn't add dependent jars, which causes a ClassNotFound 
issue. This PR does some refactoring and adds dependent jars for both `execute` 
and `executeAsync`.
   
   ## Verifying this change
   
   Manually tested via scala-shell
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] curcur commented on a change in pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur commented on a change in pull request #12269:
URL: https://github.com/apache/flink/pull/12269#discussion_r429016185



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java
##
@@ -262,6 +251,40 @@ public void failJobDueToTaskFailure(Throwable cause, 
ExecutionAttemptID failingT
}
}
 
+   @Test
+   public void testExpiredCheckpointExceedsTolerableFailureNumber() {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   ExecutionVertex vertex1 = mockExecutionVertex(new 
ExecutionAttemptID());
+   ExecutionVertex vertex2 = mockExecutionVertex(new 
ExecutionAttemptID());
+
+   final String errorMsg = "Exceeded checkpoint failure tolerance 
number!";
+   CheckpointFailureManager checkpointFailureManager = 
getCheckpointFailureManager(errorMsg);
+   CheckpointCoordinator coord = getCheckpointCoordinator(new 
JobID(), vertex1, vertex2, checkpointFailureManager);
+
+   try {
+   // trigger the checkpoint. this should succeed
+   final CompletableFuture 
checkPointFuture = coord.triggerCheckpoint(false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   
assertFalse(checkPointFuture.isCompletedExceptionally());
+
+   coord.abortPendingCheckpoints(new 
CheckpointException(CHECKPOINT_EXPIRED));
+
+   fail("Test failed.");
+   }
+   catch (Exception e) {
+   //expected
+   assertTrue(e instanceof RuntimeException);
+   assertEquals(errorMsg, e.getMessage());

Review comment:
   Hmm, I think different people have quite different tastes :-)
   
   I am doing it this way because the rest of the tests use the 
`try-catch-fail` style. But I am fine with either way.
   
   Do you insist on this? I can make the change if you do :-)
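For readers following along, the two styles under discussion can be sketched without JUnit as follows (a simplified illustration, not the actual `CheckpointCoordinatorTest` code; `assertThrows` is hand-rolled here to mirror the JUnit 5 API):

```java
public class TestStyleDemo {

    static void triggerFailure() {
        throw new RuntimeException("Exceeded checkpoint failure tolerance number!");
    }

    // Style A: try-catch-fail, as used in the surrounding test class.
    static void tryCatchFailStyle() {
        try {
            triggerFailure();
            throw new AssertionError("expected a RuntimeException"); // the "fail" step
        } catch (RuntimeException e) {
            check(e.getMessage().startsWith("Exceeded"));
        }
    }

    // Style B: an assertThrows-like helper in the JUnit 5 spirit.
    interface Executable { void execute() throws Throwable; }

    static <T extends Throwable> T assertThrows(Class<T> expected, Executable body) {
        try {
            body.execute();
        } catch (Throwable t) {
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + expected.getSimpleName() + " but nothing was thrown");
    }

    static void assertThrowsStyle() {
        RuntimeException e = assertThrows(RuntimeException.class, TestStyleDemo::triggerFailure);
        check(e.getMessage().startsWith("Exceeded"));
    }

    static void check(boolean condition) {
        if (!condition) throw new AssertionError();
    }

    public static void main(String[] args) {
        tryCatchFailStyle();
        assertThrowsStyle();
        System.out.println("both styles passed");
    }
}
```

Both verify the same thing; the assertThrows form keeps the expected-exception plumbing out of the test body.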









[GitHub] [flink] flinkbot commented on pull request #12287: [FLINK-16077][docs-zh] Translate Custom State Serialization page into Chinese

2020-05-21 Thread GitBox


flinkbot commented on pull request #12287:
URL: https://github.com/apache/flink/pull/12287#issuecomment-632452335


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b34acfd8fbfa30b41475a22c8177302d62a596ec (Fri May 22 
03:02:57 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] zhijiangW merged pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-21 Thread GitBox


zhijiangW merged pull request #12261:
URL: https://github.com/apache/flink/pull/12261


   







[GitHub] [flink] zhijiangW commented on pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-21 Thread GitBox


zhijiangW commented on pull request #12261:
URL: https://github.com/apache/flink/pull/12261#issuecomment-632452440


   Thanks for the review @pnowojski and @Jiayi-Liao . Merging!







[GitHub] [flink] zhijiangW commented on a change in pull request #12261: [FLINK-17823][network] Resolve the race condition while releasing RemoteInputChannel

2020-05-21 Thread GitBox


zhijiangW commented on a change in pull request #12261:
URL: https://github.com/apache/flink/pull/12261#discussion_r429018568



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/io/network/partition/consumer/RemoteInputChannelTest.java
##
@@ -1010,6 +1011,56 @@ public void testConcurrentRecycleAndRelease2() throws 
Exception {
}
}
 
+   @Test
+   public void testConcurrentGetNextBufferAndRelease() throws Exception {

Review comment:
   I thought of some other considerations on this issue to share.
   
   In an ITCase, even though we can reproduce some potential concurrency bugs, 
it is hard to debug and find the root cause, because all the components are 
involved. I really had that feeling when debugging the 
`UnalignedCheckpointITCase` these days.
   
   Conversely, a unit test exercises only two concurrent methods directly, so 
it is easier to find bugs by limiting the scope and the components involved. We 
already had 6 unit tests written in a concurrent way in 
`RemoteInputChannelTest`, to guarantee stability among the different concurrent 
methods executed separately by the task thread, netty thread, and canceler 
thread. If they were replaced by ITCases, we would need to debug across all of 
these methods to find the potential root cause.
   
   In general, it is better for a unit test to focus on one component or less; 
otherwise we should rely on an ITCase. In this case, we limit the scope to the 
`RemoteInputChannel` component, so it also makes sense from this aspect.
   
   Anyway, besides the pros I mentioned above for unit tests, I also agree with 
the cons you raised; the pros just slightly outweigh the cons on my side. :)
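The concurrent-unit-test pattern described above, two methods raced by separate threads, repeated many times, with an invariant checked after each run, can be sketched on a toy channel (illustrative only, not Flink's `RemoteInputChannel`):

```java
import java.util.concurrent.CountDownLatch;

public class ConcurrentUnitTestSketch {

    /** Toy stand-in for a channel whose getNextBuffer() and release() may race. */
    static class Channel {
        private boolean released;
        private Object buffer = new Object(); // the single pooled buffer

        synchronized Object getNextBuffer() {
            if (released) {
                return null;     // released first: the caller gets nothing
            }
            Object b = buffer;
            buffer = null;
            return b;
        }

        synchronized void release() {
            released = true;
            buffer = null;       // tear down the pool
        }
    }

    public static void main(String[] args) throws Exception {
        // Race the two methods many times; a latch lines the threads up
        // so the interleaving actually varies between runs.
        for (int run = 0; run < 1_000; run++) {
            Channel channel = new Channel();
            CountDownLatch start = new CountDownLatch(1);

            Thread taskThread = new Thread(() -> { awaitQuietly(start); channel.getNextBuffer(); });
            Thread cancelerThread = new Thread(() -> { awaitQuietly(start); channel.release(); });
            taskThread.start();
            cancelerThread.start();
            start.countDown();
            taskThread.join();
            cancelerThread.join();

            // Invariant: after both finish, the buffer is either handed out or
            // torn down; it must never still sit in the released channel.
            synchronized (channel) {
                if (channel.buffer != null) {
                    throw new AssertionError("buffer leaked in run " + run);
                }
            }
        }
        System.out.println("ok");
    }

    static void awaitQuietly(CountDownLatch latch) {
        try { latch.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```

The point of the pattern is that the invariant is checked with only the two racing methods in scope, which keeps the root cause localized when the check fails.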









[GitHub] [flink] curcur commented on a change in pull request #12269: [FLINK-17351] [runtime] Increase `continuousFailureCounter` in `CheckpointFailureManager` for CHECKPOINT_EXPIRED

2020-05-21 Thread GitBox


curcur commented on a change in pull request #12269:
URL: https://github.com/apache/flink/pull/12269#discussion_r429018621



##
File path: 
flink-runtime/src/test/java/org/apache/flink/runtime/checkpoint/CheckpointCoordinatorTest.java
##
@@ -262,6 +251,40 @@ public void failJobDueToTaskFailure(Throwable cause, 
ExecutionAttemptID failingT
}
}
 
+   @Test
+   public void testExpiredCheckpointExceedsTolerableFailureNumber() {
+   // create some mock Execution vertices that receive the 
checkpoint trigger messages
+   ExecutionVertex vertex1 = mockExecutionVertex(new 
ExecutionAttemptID());
+   ExecutionVertex vertex2 = mockExecutionVertex(new 
ExecutionAttemptID());
+
+   final String errorMsg = "Exceeded checkpoint failure tolerance 
number!";
+   CheckpointFailureManager checkpointFailureManager = 
getCheckpointFailureManager(errorMsg);
+   CheckpointCoordinator coord = getCheckpointCoordinator(new 
JobID(), vertex1, vertex2, checkpointFailureManager);
+
+   try {
+   // trigger the checkpoint. this should succeed
+   final CompletableFuture<CompletedCheckpoint> checkPointFuture = coord.triggerCheckpoint(false);
+   manuallyTriggeredScheduledExecutor.triggerAll();
+   
assertFalse(checkPointFuture.isCompletedExceptionally());
+
+   coord.abortPendingCheckpoints(new 
CheckpointException(CHECKPOINT_EXPIRED));
+
+   fail("Test failed.");
+   }
+   catch (Exception e) {
+   //expected
+   assertTrue(e instanceof RuntimeException);
+   assertEquals(errorMsg, e.getMessage());
+   } finally {
+   try {
+   coord.shutdown(JobStatus.FINISHED);
+   } catch (Exception e) {
+   e.printStackTrace();
+   fail(e.getMessage());

Review comment:
   This is the exception thrown while shutting down in the `finally` block. If 
we do not handle it in the catch, the catch would just swallow the exception. 
Alternatively, we can rethrow the exception.
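For context, the behavior being discussed, an exception thrown in a `finally` block replacing the in-flight exception, can be demonstrated with a minimal self-contained example (the messages are illustrative):

```java
public class FinallySwallowDemo {

    static String outcome(boolean finallyThrows) {
        try {
            try {
                throw new RuntimeException("original checkpoint failure");
            } finally {
                if (finallyThrows) {
                    // An exception thrown here REPLACES the in-flight one,
                    // so the original failure is lost.
                    throw new IllegalStateException("shutdown failed");
                }
            }
        } catch (Exception e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(outcome(false)); // prints: original checkpoint failure
        System.out.println(outcome(true));  // prints: shutdown failed
    }
}
```

This is why catching (or deliberately rethrowing) an exception raised during cleanup in `finally` matters: left unhandled, it silently masks the failure the test actually cares about.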









[jira] [Updated] (FLINK-16077) Translate "Custom State Serialization" page into Chinese

2020-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-16077:
---
Labels: pull-request-available  (was: )

> Translate "Custom State Serialization" page into Chinese
> 
>
> Key: FLINK-16077
> URL: https://issues.apache.org/jira/browse/FLINK-16077
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Assignee: Congxian Qiu(klion26)
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Complete the translation in `docs/dev/stream/state/custom_serialization.zh.md`



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17870) dependent jars are missing to be shipped to cluster in scala shell

2020-05-21 Thread Jeff Zhang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeff Zhang updated FLINK-17870:
---
Summary: dependent jars are missing to be shipped to cluster in scala shell 
 (was: scala shell jars are missing to be shipped to cluster)

> dependent jars are missing to be shipped to cluster in scala shell
> --
>
> Key: FLINK-17870
> URL: https://issues.apache.org/jira/browse/FLINK-17870
> Project: Flink
>  Issue Type: Bug
>  Components: Scala Shell
>Affects Versions: 1.11.0
>Reporter: Jeff Zhang
>Priority: Major
>






[GitHub] [flink] klion26 opened a new pull request #12287: [FLINK-16077][docs-zh] Translate Custom State Serialization page into Chinese

2020-05-21 Thread GitBox


klion26 opened a new pull request #12287:
URL: https://github.com/apache/flink/pull/12287


   ## What is the purpose of the change
   
   Translate Custom State Serialization page into Chinese
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[jira] [Commented] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113707#comment-17113707
 ] 

Dian Fu commented on FLINK-17771:
-

Merged via :
master: 8b14cd807d165052da46df2fc0d9536eadc97fe7
release-1.11: c7243c001ba632f412add975a26fe3ae1caff7b2

> "PyFlink end-to-end test" fails with "The output result: [] is not as 
> expected: [2, 3, 4]!" on Java11
> -
>
> Key: FLINK-17771
> URL: https://issues.apache.org/jira/browse/FLINK-17771
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Wei Zhong
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
> Attachments: image-2020-05-21-20-11-07-626.png, 
> image-2020-05-21-20-11-29-389.png, image-2020-05-21-20-11-48-220.png, 
> image-2020-05-21-20-12-16-889.png
>
>
> Java 11 nightly profile: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1579=logs=6caf31d6-847a-526e-9624-468e053467d6=679407b1-ea2c-5965-2c8d-146fff88
> {code}
> Job has been submitted with JobID ef78030becb3bfd6415d3de2e06420b4
> java.lang.AssertionError: The output result: [] is not as expected: [2, 3, 4]!
>   at 
> org.apache.flink.python.tests.FlinkStreamPythonUdfSqlJob.main(FlinkStreamPythonUdfSqlJob.java:55)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>   at 
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>   at 
> org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:198)
>   at 
> org.apache.flink.client.ClientUtils.executeProgram(ClientUtils.java:148)
>   at 
> org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:689)
>   at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:227)
>   at 
> org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:906)
>   at 
> org.apache.flink.client.cli.CliFrontend.lambda$main$10(CliFrontend.java:982)
>   at 
> org.apache.flink.runtime.security.contexts.NoOpSecurityContext.runSecured(NoOpSecurityContext.java:30)
>   at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:982)
> Stopping taskexecutor daemon (pid: 2705) on host fv-az670.
> {code}





[jira] [Closed] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu closed FLINK-17771.
---
Resolution: Fixed






[jira] [Assigned] (FLINK-17771) "PyFlink end-to-end test" fails with "The output result: [] is not as expected: [2, 3, 4]!" on Java11

2020-05-21 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-17771:
---

Assignee: Wei Zhong






[GitHub] [flink] dianfu closed pull request #12279: [FLINK-17771][python][e2e] Fix the OOM of the PyFlink end to end test on JDK11.

2020-05-21 Thread GitBox


dianfu closed pull request #12279:
URL: https://github.com/apache/flink/pull/12279


   







[GitHub] [flink] dianfu edited a comment on pull request #12279: [FLINK-17771][python][e2e] Fix the OOM of the PyFlink end to end test on JDK11.

2020-05-21 Thread GitBox


dianfu edited a comment on pull request #12279:
URL: https://github.com/apache/flink/pull/12279#issuecomment-632449422


   @WeiZhong94 Thanks for the PR. LGTM. I have verified the PR manually and the 
end-to-end test of PyFlink passes after applying this PR with JDK 11 (I also 
verified that it fails without this PR on JDK 11).







[GitHub] [flink] dianfu commented on pull request #12279: [FLINK-17771][python][e2e] Fix the OOM of the PyFlink end to end test on JDK11.

2020-05-21 Thread GitBox


dianfu commented on pull request #12279:
URL: https://github.com/apache/flink/pull/12279#issuecomment-632449422


   @WeiZhong94 Thanks for the PR. LGTM. I have verified manually the PR and the 
end to end test of PyFlink could pass after applying this PR with Jdk 11 (Also 
verified that it will fail without this PR in JDK 11).







[GitHub] [flink] TisonKun commented on pull request #12259: [hotfix][k8s] Remove unused constant variable

2020-05-21 Thread GitBox


TisonKun commented on pull request #12259:
URL: https://github.com/apache/flink/pull/12259#issuecomment-632445802


   @flinkbot run azure







[GitHub] [flink] flinkbot edited a comment on pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12286:
URL: https://github.com/apache/flink/pull/12286#issuecomment-632430830


   
   ## CI report:
   
   * bfc52432ddb2fc1ea708230e23f449be23df0370 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2021)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] yangyichao-mango commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-21 Thread GitBox


yangyichao-mango commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r429011221



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -27,125 +27,101 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-## Event Time and Watermarks
+## 事件时间和水印
 
-### Introduction
+### 简介
 
-Flink explicitly supports three different notions of time:
+Flink 明确的支持以下三种事件时间:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink 中通过特定的操作处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用事件时间
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种事件时间:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和水印,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用水印]({% linktutorials/streaming_analytics.md %}#使用水印)中介绍,但是首先我们需要解释一下
+水印是什么。
 
-### Watermarks
+### 水印
 
-Let's work through a simple example that will show why watermarks are needed, 
and how they work.
+让我们通过一个简单的示例来演示,该示例将说明为什么需要水印及其工作方式。
 
-In this example you have a stream of timestamped events that arrive somewhat 
out of order, as shown
-below. The numbers shown are timestamps that indicate when these events 
actually occurred. The first
-event to arrive happened at time 4, and it is followed by an event that 
happened earlier, at time 2,
-and so on:
+在此示例中,我们将看到带有混乱时间戳的事件流,如下所示。显示的数字表达的是这些事件实际发生时间的时间戳。到达的
+第一个事件发生在时间4,随后发生的事件发生在更早的时间2,依此类推:
 
 
 ··· 23 19 22 24 21 14 17 13 12 15 9 11 7 2 4 →
 
 
-Now imagine that you are trying create a stream sorter. This is meant to be an 
application that
-processes each event from a stream as it arrives, and emits a new stream 
containing the same events,
-but ordered by their timestamps.
+假设我们要对数据流排序,我们想要达到的目的是:应用程序应该在数据流里的事件到达时就处理每个事件,并发出包含相同
+事件但按其时间戳排序的新流。
 
-Some observations:
+让我们重新审视这些数据:
 
-(1) The first element your stream sorter sees is the 4, but you can't just 
immediately release it as
-the first element of the sorted stream. It may have arrived out of order, and 
an earlier event might
-yet arrive. In fact, you have the benefit of some god-like knowledge of this 
stream's future, and
-you can see that your stream sorter should wait at least until the 2 arrives 
before producing any
-results.
+(1) 我们的排序器第一个看到的数据是4,但是我们不能立即将其作为已排序流的第一个元素释放。因为我们并不能确定它是
+有序的,并且较早的事件有可能并未到达。事实上,如果站在上帝视角,我们知道,必须要等到2到来时,排序器才可以有事件输出。
 
-*Some buffering, and some delay, is necessary.*
+*需要一些缓冲,需要一些时间,但这都是值得的*
 
-(2) If you do this wrong, you could end up waiting forever. First the sorter 
saw an event from time
-4, and then an event from time 2. Will an event with a timestamp less than 2 
ever arrive? Maybe.
-Maybe not. You could wait forever and never see a 1.
+(2) 接下来的这一步,如果我们选择的是固执的等待,我们永远不会有结果。首先,我们从时间4看到了一个事件,然后从时
+间2看到了一个事件。可是,时间戳小于2的事件接下来会不会到来呢?可能会,也可能不会。再次站在上帝视角,我们知道,我
+们永远不会看到1。
 
-*Eventually you have to be courageous and emit the 2 as the start of the 
sorted stream.*
+*最终,我们必须勇于承担责任,并发出指令,把2作为已排序的事件流的开始*
 
-(3) What you need then is some sort of policy that defines when, for any given 
timestamped event, to
-stop waiting for the arrival of earlier events.
+(3)然后,我们需要一种策略,该策略定义:对于任何给定时间戳的事件,Flink何时停止等待较早事件的到来。
 
-*This is precisely what watermarks do* — they define when to stop waiting for 
earlier events.
+*这正是水印的作用* — 它们定义了何时停止等待较早的事件。
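
The buffering-and-emit behavior walked through above can be sketched as a small standalone Java program. This is illustrative only, not Flink's actual implementation; the out-of-orderness bound of 10 is an assumed parameter, and the input is the event stream from the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class WatermarkSorter {
    // Buffer out-of-order events (represented by their timestamps) and emit them
    // in sorted order once the watermark (max timestamp seen minus a fixed
    // out-of-orderness bound) has passed them.
    public static List<Integer> sortWithWatermarks(int[] events, int maxOutOfOrderness) {
        PriorityQueue<Integer> buffer = new PriorityQueue<>();
        List<Integer> output = new ArrayList<>();
        long watermark = Long.MIN_VALUE;
        for (int ts : events) {
            buffer.add(ts);
            // The watermark only ever moves forward.
            watermark = Math.max(watermark, (long) ts - maxOutOfOrderness);
            // Emit every buffered event the watermark has passed.
            while (!buffer.isEmpty() && buffer.peek() <= watermark) {
                output.add(buffer.poll());
            }
        }
        // Flush the remaining buffered events when the stream ends.
        while (!buffer.isEmpty()) {
            output.add(buffer.poll());
        }
        return output;
    }

    public static void main(String[] args) {
        // The event stream from the example, in arrival order.
        int[] events = {4, 2, 7, 11, 9, 15, 12, 13, 17, 14, 21, 24, 22, 19, 23};
        System.out.println(sortWithWatermarks(events, 10));
    }
}
```

With a bound of 10 the sorter never emits an element before all smaller timestamps have arrived, at the cost of the buffering delay discussed above.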

[GitHub] [flink] Myasuka commented on a change in pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


Myasuka commented on a change in pull request #12078:
URL: https://github.com/apache/flink/pull/12078#discussion_r429010509



##
File path: 
flink-state-backends/flink-statebackend-rocksdb/src/test/java/org/apache/flink/contrib/streaming/state/ttl/RocksDBTtlStateTestBase.java
##
@@ -41,6 +41,7 @@
 import static 
org.apache.flink.contrib.streaming.state.RocksDBOptions.TTL_COMPACT_FILTER_ENABLED;
 import static org.junit.Assert.assertEquals;
 import static org.junit.Assert.assertNotEquals;
+import static org.junit.Assert.assertTrue;
 
 /** Base test suite for rocksdb state TTL. */
 public abstract class RocksDBTtlStateTestBase extends TtlStateTestBase {

Review comment:
   I noticed the 
[CI](https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2020=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=4ed44b66-cdd6-5dcf-5f6a-88b07dda665d)
 is still broken.
   There are three `assertEquals("Original state should be cleared on 
access", ctx().emptyValue, ctx().getOriginal());` calls in `TtlStateTestBase.java`, 
and I think they are all affected.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] yangyichao-mango commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-21 Thread GitBox


yangyichao-mango commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r427058760



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -27,125 +27,101 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-## Event Time and Watermarks
+## 事件时间和水印
 
-### Introduction
+### 简介
 
-Flink explicitly supports three different notions of time:
+Flink 明确的支持以下三种事件时间:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink 中通过特定的操作处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用事件时间
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种事件时间:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和水印,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用水印]({% linktutorials/streaming_analytics.md %}#使用水印)中介绍,但是首先我们需要解释一下
+水印是什么。
 
-### Watermarks
+### 水印
 
-Let's work through a simple example that will show why watermarks are needed, 
and how they work.
+让我们通过一个简单的示例来演示,该示例将说明为什么需要水印及其工作方式。

Review comment:
   ```suggestion
   让我们通过一个简单的示例来演示为什么需要 watermarks 及其工作方式。
   ```









[GitHub] [flink] yangyichao-mango commented on a change in pull request #12237: [FLINK-17290] [chinese-translation, Documentation / Training] Transla…

2020-05-21 Thread GitBox


yangyichao-mango commented on a change in pull request #12237:
URL: https://github.com/apache/flink/pull/12237#discussion_r427056510



##
File path: docs/training/streaming_analytics.zh.md
##
@@ -27,125 +27,101 @@ under the License.
 * This will be replaced by the TOC
 {:toc}
 
-## Event Time and Watermarks
+## 事件时间和水印
 
-### Introduction
+### 简介
 
-Flink explicitly supports three different notions of time:
+Flink 明确的支持以下三种事件时间:
 
-* _event time:_ the time when an event occurred, as recorded by the device 
producing (or storing) the event
+* _事件时间:_ 事件产生的时间,记录的是设备生产(或者存储)事件的时间
 
-* _ingestion time:_ a timestamp recorded by Flink at the moment it ingests the 
event
+* _摄取时间:_ Flink 提取事件时记录的时间戳
 
-* _processing time:_ the time when a specific operator in your pipeline is 
processing the event
+* _处理时间:_ Flink 中通过特定的操作处理事件的时间
 
-For reproducible results, e.g., when computing the maximum price a stock 
reached during the first
-hour of trading on a given day, you should use event time. In this way the 
result won't depend on
-when the calculation is performed. This kind of real-time application is 
sometimes performed using
-processing time, but then the results are determined by the events that happen 
to be processed
-during that hour, rather than the events that occurred then. Computing 
analytics based on processing
-time causes inconsistencies, and makes it difficult to re-analyze historic 
data or test new
-implementations.
+为了获得可重现的结果,例如在计算过去的特定一天里第一个小时股票的最高价格时,我们应该使用事件时间。这样的话,无论
+什么时间去计算都不会影响输出结果。然而有些人,在实时计算应用时使用处理时间,这样的话,输出结果就会被处理时间点所决
+定,而不是事件的生成时间。基于处理时间会导致多次计算的结果不一致,也可能会导致重新分析历史数据和测试变得异常困难。
 
-### Working with Event Time
+### 使用事件时间
 
-By default, Flink will use processing time. To change this, you can set the 
Time Characteristic:
+Flink 在默认情况下使用处理时间。也可以通过如下配置来告诉 Flink 选择哪种事件时间:
 
 {% highlight java %}
 final StreamExecutionEnvironment env =
 StreamExecutionEnvironment.getExecutionEnvironment();
 env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
 {% endhighlight %}
 
-If you want to use event time, you will also need to supply a Timestamp 
Extractor and Watermark
-Generator that Flink will use to track the progress of event time. This will 
be covered in the
-section below on [Working with Watermarks]({% link
-training/streaming_analytics.zh.md %}#working-with-watermarks), but first we 
should explain what
-watermarks are.
+如果想要使用事件时间,则需要额外给 Flink 提供一个时间戳的提取器和水印,Flink 将使用它们来跟踪事件时间的进度。这
+将在选节[使用水印]({% linktutorials/streaming_analytics.md %}#使用水印)中介绍,但是首先我们需要解释一下
+水印是什么。
 
-### Watermarks
+### 水印

Review comment:
   ```suggestion
   ### Watermarks
   ```









[jira] [Commented] (FLINK-17721) AbstractHadoopFileSystemITTest .cleanupDirectoryWithRetry fails with AssertionError

2020-05-21 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113680#comment-17113680
 ] 

Xintong Song commented on FLINK-17721:
--

cc [~aljoscha] [~pnowojski] [~kkl0u]

> AbstractHadoopFileSystemITTest .cleanupDirectoryWithRetry fails with 
> AssertionError 
> 
>
> Key: FLINK-17721
> URL: https://issues.apache.org/jira/browse/FLINK-17721
> Project: Flink
>  Issue Type: Bug
>  Components: FileSystems, Tests
>Reporter: Robert Metzger
>Assignee: Xintong Song
>Priority: Critical
> Fix For: 1.11.0
>
>
> CI: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=1343=logs=961f8f81-6b52-53df-09f6-7291a2e4af6a=2f99feaa-7a9b-5916-4c1c-5e61f395079e
> {code}
> [ERROR] Tests run: 3, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 34.079 s <<< FAILURE! - in 
> org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase
> [ERROR] org.apache.flink.fs.s3hadoop.HadoopS3FileSystemITCase  Time elapsed: 
> 21.334 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertFalse(Assert.java:64)
>   at org.junit.Assert.assertFalse(Assert.java:74)
>   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopFileSystemITTest.cleanupDirectoryWithRetry(AbstractHadoopFileSystemITTest.java:162)
>   at 
> org.apache.flink.runtime.fs.hdfs.AbstractHadoopFileSystemITTest.teardown(AbstractHadoopFileSystemITTest.java:149)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12286:
URL: https://github.com/apache/flink/pull/12286#issuecomment-632430830


   
   ## CI report:
   
   * bfc52432ddb2fc1ea708230e23f449be23df0370 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2021)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-17560) No Slots available exception in Apache Flink Job Manager while Scheduling

2020-05-21 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113678#comment-17113678
 ] 

Xintong Song commented on FLINK-17560:
--

If you can reproduce this issue, it would be helpful to provide the logs.

> No Slots available exception in Apache Flink Job Manager while Scheduling
> -
>
> Key: FLINK-17560
> URL: https://issues.apache.org/jira/browse/FLINK-17560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.3
> Environment: Flink verson 1.8.3
> Session cluster
>Reporter: josson paul kalapparambath
>Priority: Major
>
> Set up
> --
> Flink verson 1.8.3
> Zookeeper HA cluster
> 1 ResourceManager/Dispatcher (Same Node)
> 1 TaskManager
> 4 pipelines running with various parallelism's
> Issue
> --
> Occasionally when the Job Manager gets restarted, we noticed that all the 
> pipelines are not getting scheduled. The error reported by the Job 
> Manager is 'not enough slots are available'. This should not be the case 
> because the task manager was deployed with sufficient slots for the number of 
> pipelines/parallelism we have.
> We further noticed that the slot report sent by the task manager contains slots 
> filled with old CANCELLED job IDs. I am not sure why the task manager still 
> holds the details of the old jobs. A thread dump on the task manager confirms 
> that old pipelines are not running.
> I am aware of https://issues.apache.org/jira/browse/FLINK-12865. But this is 
> not the issue happening in this case.





[jira] [Updated] (FLINK-17871) Make the default value of attemptFailuresValidityInterval more reasonable

2020-05-21 Thread fanxin (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fanxin updated FLINK-17871:
---
Description: The default value of 
`yarn.application-attempt-failures-validity-interval` is `10000` milliseconds 
at present. Usually preparing the context alone can take seconds, which means 
the default value of 10000 is too small to even prepare the runtime context. 
With the default config, a Flink on YARN job will hardly meet the condition of 
"fail 2 times in 10s". If the job has some internal problems, unfortunately, it 
can easily get bogged down in endless retries.  (was: Default value of 
`yarn.application-attempt-failures-validity-interval` is `10000` milliseconds 
at present. Usually preparing the context alone can take seconds, which means 
that default value 10000 is too small even to ready a runtime context. With a 
default config, a flink on yarn job in will hardly meet the condition of ”fail 
2 times in 10s“. If the job has some internal problems, unfortunately, it can 
easily get bogged down in endless retries.)
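
The windowed failure counting the interval controls can be sketched as follows. This is an illustrative model, not Flink's or YARN's actual implementation; the class and method names are made up for the example.

```java
import java.util.ArrayDeque;

public class FailureWindow {
    // Only failures inside the validity interval count toward the attempt
    // limit; older failures are forgotten.
    private final ArrayDeque<Long> failures = new ArrayDeque<>();
    private final long validityIntervalMs;
    private final int maxAttempts;

    FailureWindow(long validityIntervalMs, int maxAttempts) {
        this.validityIntervalMs = validityIntervalMs;
        this.maxAttempts = maxAttempts;
    }

    // Record a failure at `nowMs` and report whether the attempts are exhausted.
    boolean attemptsExhausted(long nowMs) {
        failures.addLast(nowMs);
        // Drop failures that fell out of the validity window.
        while (!failures.isEmpty() && nowMs - failures.peekFirst() > validityIntervalMs) {
            failures.removeFirst();
        }
        return failures.size() >= maxAttempts;
    }

    public static void main(String[] args) {
        // With a 10000 ms window and 2 attempts, two failures 15 s apart never
        // exhaust the attempts: the job keeps retrying forever.
        FailureWindow w = new FailureWindow(10_000, 2);
        System.out.println(w.attemptsExhausted(0) + " " + w.attemptsExhausted(15_000));
    }
}
```

If each failed attempt takes longer than the window just to prepare its context, the counter is reset on every attempt, which is the endless-retry behavior described above.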

> Make the default value of attemptFailuresValidityInterval more reasonable
> -
>
> Key: FLINK-17871
> URL: https://issues.apache.org/jira/browse/FLINK-17871
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: fanxin
>Priority: Minor
>
> The default value of `yarn.application-attempt-failures-validity-interval` is 
> `10000` milliseconds at present. Usually preparing the context alone can take 
> seconds, which means the default value of 10000 is too small to even prepare 
> the runtime context. With the default config, a Flink on YARN job will 
> hardly meet the condition of "fail 2 times in 10s". If the job has some 
> internal problems, unfortunately, it can easily get bogged down in endless 
> retries.





[GitHub] [flink] hequn8128 commented on pull request #12246: [FLINK-17303][python] Return TableResult for Python TableEnvironment

2020-05-21 Thread GitBox


hequn8128 commented on pull request #12246:
URL: https://github.com/apache/flink/pull/12246#issuecomment-632431669


   @flinkbot run azure







[jira] [Created] (FLINK-17871) Make the default value of attemptFailuresValidityInterval more reasonable

2020-05-21 Thread fanxin (Jira)
fanxin created FLINK-17871:
--

 Summary: Make the default value of attemptFailuresValidityInterval 
more reasonable
 Key: FLINK-17871
 URL: https://issues.apache.org/jira/browse/FLINK-17871
 Project: Flink
  Issue Type: Improvement
  Components: Deployment / YARN
Reporter: fanxin


The default value of `yarn.application-attempt-failures-validity-interval` is 
`10000` milliseconds at present. Usually preparing the context alone can take 
seconds, which means the default value of 10000 is too small to even prepare the 
runtime context. With the default config, a Flink on YARN job will hardly meet 
the condition of "fail 2 times in 10s". If the job has some internal problems, 
unfortunately, it can easily get bogged down in endless retries.





[GitHub] [flink] flinkbot commented on pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


flinkbot commented on pull request #12286:
URL: https://github.com/apache/flink/pull/12286#issuecomment-632430830


   
   ## CI report:
   
   * bfc52432ddb2fc1ea708230e23f449be23df0370 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-626611802


   
   ## CI report:
   
   * e3ffb15cc38bdbbf1f5a11014782d081edaecea6 UNKNOWN
   * d9fd98ae5664303184273f83e4afb0623e6406f0 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2020)
 
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhengcanbin commented on pull request #12259: [hotfix][k8s] Remove unused constant variable

2020-05-21 Thread GitBox


zhengcanbin commented on pull request #12259:
URL: https://github.com/apache/flink/pull/12259#issuecomment-632426867


   cc @TisonKun 







[GitHub] [flink] zhengcanbin commented on pull request #12277: [FLINK-17230] Fix incorrect returned address of Endpoint for external Service of ClusterIP type

2020-05-21 Thread GitBox


zhengcanbin commented on pull request #12277:
URL: https://github.com/apache/flink/pull/12277#issuecomment-632426186


   cc @TisonKun. Could you help take a look? 







[jira] [Commented] (FLINK-15778) SQL Client end-to-end test for Kafka 0.10 nightly run hung on travis

2020-05-21 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113665#comment-17113665
 ] 

Jiangjie Qin commented on FLINK-15778:
--

This issue might be related to FLINK-14592, but I am not sure. Let's see if this 
still happens after FLINK-14592 is fixed.

> SQL Client end-to-end test for Kafka 0.10 nightly run hung on travis
> 
>
> Key: FLINK-15778
> URL: https://issues.apache.org/jira/browse/FLINK-15778
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Table SQL / Client
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Yu Li
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.0, 1.10.2
>
>
> The "SQL Client end-to-end test for Kafka 0.10" end-to-end test hung on 
> travis:
> {noformat}
> Waiting for broker...
> Waiting for broker...
> The job exceeded the maximum time limit for jobs, and has been terminated.
> {noformat}
> https://api.travis-ci.org/v3/job/642477196/log.txt





[GitHub] [flink] flinkbot commented on pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


flinkbot commented on pull request #12286:
URL: https://github.com/apache/flink/pull/12286#issuecomment-632425120


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bfc52432ddb2fc1ea708230e23f449be23df0370 (Fri May 22 
01:15:19 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[GitHub] [flink] becketqin commented on pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


becketqin commented on pull request #12286:
URL: https://github.com/apache/flink/pull/12286#issuecomment-632424729


   @pnowojski Do you have time to take a look? Thanks.







[jira] [Updated] (FLINK-14592) FlinkKafkaInternalProducerITCase fails with BindException

2020-05-21 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-14592:
---
Labels: pull-request-available test-stability  (was: test-stability)

> FlinkKafkaInternalProducerITCase  fails with BindException
> --
>
> Key: FLINK-14592
> URL: https://issues.apache.org/jira/browse/FLINK-14592
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.10.0
>Reporter: Gary Yao
>Assignee: Jiangjie Qin
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> FlinkKafkaInternalProducerITCase fails with java.net.BindException: Address 
> already in use.
> Logs: https://api.travis-ci.org/v3/job/605478801/log.txt
> {noformat}
> 02:04:04.878 [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time 
> elapsed: 8.822 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase
> 02:04:04.882 [ERROR] 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase  
> Time elapsed: 8.822 s  <<< ERROR!
> org.apache.kafka.common.KafkaException: Socket server failed to bind to 
> 0.0.0.0:38437: Address already in use.
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.prepare(FlinkKafkaInternalProducerITCase.java:59)
> Caused by: java.net.BindException: Address already in use
>   at 
> org.apache.flink.streaming.connectors.kafka.FlinkKafkaInternalProducerITCase.prepare(FlinkKafkaInternalProducerITCase.java:59)
> {noformat}





[GitHub] [flink] becketqin opened a new pull request #12286: [FLINK-14592][connector/kafka][test-stability] Use random port for broker creation.

2020-05-21 Thread GitBox


becketqin opened a new pull request #12286:
URL: https://github.com/apache/flink/pull/12286


   ## What is the purpose of the change
   Before this patch, Kafka brokers in the IT cases are assigned a port from 
`NetUtils.getAvailablePort()`. This is a little fragile because there is no 
guarantee that the port will not be recycled and used by someone else before the 
broker comes up. Retries were added to avoid this case, but we still see 
`BindException`s from time to time.
   
   This patch solves the problem by letting the brokers start their listeners on 
random ports.
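
The difference between the two approaches can be sketched with plain `java.net.ServerSocket` (illustrative only; this is not the PR's code, and `NetUtils.getAvailablePort()` is approximated by the probe-and-close pattern):

```java
import java.net.ServerSocket;

public class EphemeralPortDemo {
    public static void main(String[] args) throws Exception {
        // Fragile pattern (roughly what NetUtils.getAvailablePort() does):
        // probe a free port, close the socket, and reuse the number later.
        int probed;
        try (ServerSocket probe = new ServerSocket(0)) {
            probed = probe.getLocalPort();
        }
        // Between this point and the broker's own bind(), any other process
        // may grab `probed`, which is where the BindException comes from.

        // Robust pattern: bind with port 0 so the OS assigns an ephemeral
        // port atomically, then read the assigned port back from the socket.
        try (ServerSocket listener = new ServerSocket(0)) {
            System.out.println(listener.getLocalPort() > 0);
        }
    }
}
```

Binding to port 0 removes the time window between port selection and binding, so no retry loop is needed.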
   
   ## Brief change log
   bfc52432ddb2fc1ea708230e23f449be23df0370 Let the brokers pick their own 
random ports instead of using `NetUtils.getAvailablePort()`.
   
   ## Verifying this change
   Ran the tests repeatedly in IntelliJ without seeing `BindException` anymore. 
Before the patch, the exception was likely to be thrown after 10-20 runs.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   







[GitHub] [flink] flinkbot edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-626611802


   
   ## CI report:
   
   * e3ffb15cc38bdbbf1f5a11014782d081edaecea6 UNKNOWN
   * 15aed4e77715ccfd4588c2a740b91fc1467c629a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1998)
 
   * d9fd98ae5664303184273f83e4afb0623e6406f0 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2020)
 
   
   
   







[jira] [Created] (FLINK-17870) scala shell jars are missing to be shipped to cluster

2020-05-21 Thread Jeff Zhang (Jira)
Jeff Zhang created FLINK-17870:
--

 Summary: scala shell jars are missing to be shipped to cluster
 Key: FLINK-17870
 URL: https://issues.apache.org/jira/browse/FLINK-17870
 Project: Flink
  Issue Type: Bug
  Components: Scala Shell
Affects Versions: 1.11.0
Reporter: Jeff Zhang








[GitHub] [flink] flinkbot edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-626611802


   
   ## CI report:
   
   * e3ffb15cc38bdbbf1f5a11014782d081edaecea6 UNKNOWN
   * 15aed4e77715ccfd4588c2a740b91fc1467c629a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1998)
 
   * d9fd98ae5664303184273f83e4afb0623e6406f0 UNKNOWN
   
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #11900:
URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824


   
   ## CI report:
   
   * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN
   * ccd3b17332d5a7eb5bb97124cd1a9a4c3986d82d Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2019)
 
   
   
   







[GitHub] [flink] SteNicholas edited a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


SteNicholas edited a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-632400538


   @Myasuka Sorry for the squash mistake. Please check the `isOriginalEmptyValue()` 
change for `TtlMapStateAllEntriesTestContext`.







[GitHub] [flink] SteNicholas commented on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


SteNicholas commented on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-632400538


   @Myasuka Sorry for the merge mistake. Please check the `isOriginalEmptyValue()` 
change for `TtlMapStateAllEntriesTestContext`.







[GitHub] [flink] SteNicholas removed a comment on pull request #12078: [FLINK-17610][state] Align the behavior of result of internal map state to return empty iterator

2020-05-21 Thread GitBox


SteNicholas removed a comment on pull request #12078:
URL: https://github.com/apache/flink/pull/12078#issuecomment-632082769


   @Myasuka @klion26 @carp84 Please review the changes again, thanks.







[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #11900:
URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824


   
   ## CI report:
   
   * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN
   * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1942)
 
   * ccd3b17332d5a7eb5bb97124cd1a9a4c3986d82d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2019)
 
   
   







[GitHub] [flink] flinkbot edited a comment on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #11900:
URL: https://github.com/apache/flink/pull/11900#issuecomment-618914824


   
   ## CI report:
   
   * 69bce2717b0279a894aa66d15cd4b9b72cd5a474 UNKNOWN
   * 4cf97b2be4447c2d2f94259ad559fefb79a0a727 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=1942)
 
   * ccd3b17332d5a7eb5bb97124cd1a9a4c3986d82d UNKNOWN
   
   







[jira] [Comment Edited] (FLINK-17560) No Slots available exception in Apache Flink Job Manager while Scheduling

2020-05-21 Thread josson paul kalapparambath (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113567#comment-17113567
 ] 

josson paul kalapparambath edited comment on FLINK-17560 at 5/21/20, 10:54 PM:
---

[~xintongsong]

I am able to reproduce this issue (not consistently) when the number of threads 
in the Task Manager is very high. If the thread count on the TM is high and the 
Job Manager is restarted, we sometimes run into this issue. It looks as though 
some piece of code is not being executed in the notifyFinalState() path. 
Possibly thread contention?


was (Author: josson):
[~xintongsong]

I am able to reproduce this issue (Not consistently)  if the number of threads 
in the Task Manager is very high. If the number of threads are high on TM and 
restart the Job manager, some times we get into this issue. For me it looks 
like some piece of code is not executed in the path of notifyFinalState(). Some 
thread contention?. 

> No Slots available exception in Apache Flink Job Manager while Scheduling
> -
>
> Key: FLINK-17560
> URL: https://issues.apache.org/jira/browse/FLINK-17560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.3
> Environment: Flink version 1.8.3
> Session cluster
>Reporter: josson paul kalapparambath
>Priority: Major
>
> Set up
> --
> Flink version 1.8.3
> Zookeeper HA cluster
> 1 ResourceManager/Dispatcher (same node)
> 1 TaskManager
> 4 pipelines running with various parallelisms
> Issue
> --
> Occasionally, when the Job Manager gets restarted, we notice that none of the 
> pipelines are getting scheduled. The error reported by the Job 
> Manager is 'not enough slots are available'. This should not be the case, 
> because the task manager was deployed with sufficient slots for the number of 
> pipelines/parallelisms we have.
> We further noticed that the slot report sent by the task manager contains slots 
> filled with old CANCELLED job IDs. I am not sure why the task manager still 
> holds the details of the old jobs. A thread dump on the task manager confirms 
> that the old pipelines are not running.
> I am aware of https://issues.apache.org/jira/browse/FLINK-12865, but this is 
> not the issue happening in this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] fpompermaier commented on pull request #11900: [FLINK-17284][jdbc][postgres] Support serial fields

2020-05-21 Thread GitBox


fpompermaier commented on pull request #11900:
URL: https://github.com/apache/flink/pull/11900#issuecomment-632378430


   Rebased again.







[GitHub] [flink] flinkbot edited a comment on pull request #12285: [FLINK-17445][State Processor] Add Scala support for OperatorTransformation

2020-05-21 Thread GitBox


flinkbot edited a comment on pull request #12285:
URL: https://github.com/apache/flink/pull/12285#issuecomment-632360179


   
   ## CI report:
   
   * 754d93703e9ecf3043b9bf57121d1636a3a4c167 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2018)
 
   
   







[GitHub] [flink] opnarius commented on pull request #7226: FLINK-11050 add lowerBound and upperBound for optimizing RocksDBMapState's entries

2020-05-21 Thread GitBox


opnarius commented on pull request #7226:
URL: https://github.com/apache/flink/pull/7226#issuecomment-632371433


   Hello, any more thoughts on implementing ordering and filtering of MapState? 
This would really boost inner join performance.







[jira] [Commented] (FLINK-17560) No Slots available exception in Apache Flink Job Manager while Scheduling

2020-05-21 Thread josson paul kalapparambath (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17113567#comment-17113567
 ] 

josson paul kalapparambath commented on FLINK-17560:


[~xintongsong]

I am able to reproduce this issue (not consistently) when the number of threads 
in the Task Manager is very high. If the thread count on the TM is high and the 
Job Manager is restarted, we sometimes run into this issue. It looks as though 
some piece of code is not executed in the notifyFinalState() path. 
Possibly thread contention?

> No Slots available exception in Apache Flink Job Manager while Scheduling
> -
>
> Key: FLINK-17560
> URL: https://issues.apache.org/jira/browse/FLINK-17560
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.8.3
> Environment: Flink version 1.8.3
> Session cluster
>Reporter: josson paul kalapparambath
>Priority: Major
>
> Set up
> --
> Flink version 1.8.3
> Zookeeper HA cluster
> 1 ResourceManager/Dispatcher (same node)
> 1 TaskManager
> 4 pipelines running with various parallelisms
> Issue
> --
> Occasionally, when the Job Manager gets restarted, we notice that none of the 
> pipelines are getting scheduled. The error reported by the Job 
> Manager is 'not enough slots are available'. This should not be the case, 
> because the task manager was deployed with sufficient slots for the number of 
> pipelines/parallelisms we have.
> We further noticed that the slot report sent by the task manager contains slots 
> filled with old CANCELLED job IDs. I am not sure why the task manager still 
> holds the details of the old jobs. A thread dump on the task manager confirms 
> that the old pipelines are not running.
> I am aware of https://issues.apache.org/jira/browse/FLINK-12865, but this is 
> not the issue happening in this case.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12285: [FLINK-17445][State Processor] Add Scala support for OperatorTransformation

2020-05-21 Thread GitBox


flinkbot commented on pull request #12285:
URL: https://github.com/apache/flink/pull/12285#issuecomment-632360179


   
   ## CI report:
   
   * 754d93703e9ecf3043b9bf57121d1636a3a4c167 UNKNOWN
   
   






