[jira] [Created] (FLINK-18270) ResumeCheckpointManuallyITCase.testExternalizedFSCheckpointsWithLocalRecoveryZookeeper unstable

2020-06-11 Thread Robert Metzger (Jira)
Robert Metzger created FLINK-18270:
--

 Summary: 
ResumeCheckpointManuallyITCase.testExternalizedFSCheckpointsWithLocalRecoveryZookeeper
 unstable
 Key: FLINK-18270
 URL: https://issues.apache.org/jira/browse/FLINK-18270
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing, Tests
Affects Versions: 1.11.0
Reporter: Robert Metzger


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3343&view=logs&j=baf26b34-3c6a-54e8-f93f-cf269b32f802&t=d6363642-ea4a-5c73-7edb-c00d4548b58e

{code}
2020-06-11T21:52:32.3543580Z [ERROR] Tests run: 12, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 127.499 s <<< FAILURE! - in org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase
2020-06-11T21:52:32.3546317Z [ERROR] testExternalizedFSCheckpointsWithLocalRecoveryZookeeper(org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase)  Time elapsed: 19.49 s  <<< ERROR!
2020-06-11T21:52:32.3548424Z org.apache.flink.client.program.ProgramInvocationException: Could not run job in detached mode. (JobID: 97cf874c9cb8a81ec15c0b8acb470494)
2020-06-11T21:52:32.3550286Z   at org.apache.flink.client.ClientUtils.submitJob(ClientUtils.java:89)
2020-06-11T21:52:32.3551942Z   at org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.runJobAndGetExternalizedCheckpoint(ResumeCheckpointManuallyITCase.java:298)
2020-06-11T21:52:32.3554031Z   at org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedCheckpoints(ResumeCheckpointManuallyITCase.java:284)
2020-06-11T21:52:32.3556326Z   at org.apache.flink.test.checkpointing.ResumeCheckpointManuallyITCase.testExternalizedFSCheckpointsWithLocalRecoveryZookeeper(ResumeCheckpointManuallyITCase.java:225)
2020-06-11T21:52:32.3558357Z   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-06-11T21:52:32.3559639Z   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-06-11T21:52:32.3561162Z   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-06-11T21:52:32.3562366Z   at java.lang.reflect.Method.invoke(Method.java:498)
2020-06-11T21:52:32.3563625Z   at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-06-11T21:52:32.3565041Z   at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-06-11T21:52:32.358Z   at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-06-11T21:52:32.3568105Z   at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-06-11T21:52:32.3569476Z   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
2020-06-11T21:52:32.3570935Z   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-06-11T21:52:32.3572328Z   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
2020-06-11T21:52:32.3573585Z   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
2020-06-11T21:52:32.3574972Z   at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
2020-06-11T21:52:32.3576385Z   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
2020-06-11T21:52:32.3577570Z   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
2020-06-11T21:52:32.3578808Z   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
2020-06-11T21:52:32.3580233Z   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
2020-06-11T21:52:32.3581458Z   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
2020-06-11T21:52:32.3582693Z   at org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:48)
2020-06-11T21:52:32.3583893Z   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
2020-06-11T21:52:32.3584958Z   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
2020-06-11T21:52:32.3586371Z   at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
2020-06-11T21:52:32.3587821Z   at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
2020-06-11T21:52:32.3589274Z   at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
2020-06-11T21:52:32.3590928Z   at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
2020-06-11T21:52:32.3592436Z   at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
2020-06-11T21:52:32.3593996Z   at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
2020-06-11T21:52:32.3595420Z   at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
2020-06-11T21:52:32.3596919Z   at

[GitHub] [flink] JingsongLi commented on pull request #12610: [FLINK-17686][doc] Add document to dataGen, print, blackhole connectors

2020-06-11 Thread GitBox


JingsongLi commented on pull request #12610:
URL: https://github.com/apache/flink/pull/12610#issuecomment-643080150


   Thanks @sjwiesman @godfreyhe @danny0405 for your review, updated.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink-web] klion26 commented on a change in pull request #247: [FLINK-13683] Translate "Code Style - Component Guide" page into Chinese

2020-06-11 Thread GitBox


klion26 commented on a change in pull request #247:
URL: https://github.com/apache/flink-web/pull/247#discussion_r439216882



##
File path: contributing/code-style-and-quality-components.zh.md
##
@@ -48,96 +47,95 @@ How to name config keys:
   }
   ```
 
-* The resulting config keys should hence be:
+* 因此生成的配置键应该:
 
-  **NOT** `"taskmanager.detailed.network.metrics"`
+  **不是** `"taskmanager.detailed.network.metrics"`
 
-  **But rather** `"taskmanager.network.detailed-metrics"`
+  **而是** `"taskmanager.network.detailed-metrics"`
 
 
-### Connectors
+### 连接器
 
-Connectors are historically hard to implement and need to deal with many 
aspects of threading, concurrency, and checkpointing.
+连接器历来很难实现,需要处理多线程、并发和检查点等许多方面。
 
-As part of 
[FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
 we are working on making this much simpler for sources. New sources should not 
have to deal with any aspect of concurrency/threading and checkpointing any 
more.
+作为 
[FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)
 的一部分,我们正在努力实现数据源(source)。新的数据源应该不必处理并发/线程和检查点的任何方面。

Review comment:
   The sentence `作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) 的一部分,我们正在努力实现数据源(source)` feels incomplete: it reads as "we are working on implementing sources" and then just stops. It also drifts slightly from the original English, which says this work is about making sources much simpler to implement.

##
File path: contributing/code-style-and-quality-components.zh.md
##
@@ -9,24 +9,23 @@ title:  "Apache Flink Code Style and Quality Guide  — 
Components"
 
 
 
-## Component Specific Guidelines
+## 组件特定指南
 
-_Additional guidelines about changes in specific components._
+_关于特定组件更改的附加指南。_
 
 
-### Configuration Changes
+### 配置更改
 
-Where should the config option go?
+配置选项应该放在哪里?
 
-* ‘flink-conf.yaml’: All 
configuration that pertains to execution behavior that one may want to 
standardize across jobs. Think of it as parameters someone would set wearing an 
“ops” hat, or someone that provides a stream processing platform to other teams.
+* ‘flink-conf.yaml’: 
所有属于可能要跨作业标准化的执行行为配置。可以将其想像成 Ops 的工作人员,或为其他团队提供流处理平台的人。

Review comment:
   I took another look. I think this means the parameters in "flink-conf" are the ones that ops people would set, not that the config itself is an ops person?
   That is, flink-conf holds the parameters that ops staff, or people who provide the platform to other teams, would set uniformly?
   It may be worth thinking about how to phrase this better~





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12627: [FLINK-18083][hbase] Improve exception message of TIMESTAMP/TIME out of the HBase connector supported precision

2020-06-11 Thread GitBox


flinkbot commented on pull request #12627:
URL: https://github.com/apache/flink/pull/12627#issuecomment-643073650


   
   ## CI report:
   
   * 62a91a8087b1aa7d6a68d40a40c8473664dfa170 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12527: [FLINK-18173][build] Bundle flink-csv,flink-json,flink-avro jars in lib

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12527:
URL: https://github.com/apache/flink/pull/12527#issuecomment-640564258


   
   ## CI report:
   
   * a6add24c7d5a479f5ad83bf25f3086881b69cad7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3293)
 
   * 7ea595432ea5af30e292a6538eb27b6de6eeef1d Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3362)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643039828


   
   ## CI report:
   
   * b12eb7d44f3aa147a8580cd3fdeda71290dc6efb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3355)
 
   * 43d37e249d4877a4c65e538f32143a3b3c41e22f Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3363)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17752) Align the timestamp format with Flink SQL types in JSON format

2020-06-11 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-17752:

Description: 
Currently, we are using RFC3339_TIMESTAMP_FORMAT (which adds a timezone at 
the end of the string) as the timestamp format in JSON. However, the string 
representation of {{TIMESTAMP (WITHOUT TIME ZONE)}} shouldn't add 'Z' at the 
end. 

Previous discussion: 
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/TIME-TIMESTAMP-parse-in-Flink-TABLE-SQL-API-td33061.html#a33095
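
To make the difference concrete, a small standalone java.time example (illustration only, not the Flink JSON format code):

{code:java}
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimestampFormatDemo {
    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2020, 6, 11, 21, 52, 32);

        // RFC 3339 / ISO-instant style: an absolute instant, serialized with a trailing 'Z'.
        String withZone = DateTimeFormatter.ISO_INSTANT.format(ts.toInstant(ZoneOffset.UTC));
        System.out.println(withZone);    // prints 2020-06-11T21:52:32Z

        // TIMESTAMP (WITHOUT TIME ZONE): a local date-time, no 'Z' and no offset.
        String withoutZone = DateTimeFormatter.ISO_LOCAL_DATE_TIME.format(ts);
        System.out.println(withoutZone); // prints 2020-06-11T21:52:32
    }
}
{code}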

  was:Currently, we are using RFC3339_TIMESTAMP_FORMAT (which adds a timezone 
at the end of the string) as the timestamp format in JSON. However, the string 
representation of {{TIMESTAMP (WITHOUT TIME ZONE)}} shouldn't add 'Z' at the 
end. 


> Align the timestamp format with Flink SQL types in JSON format
> --
>
> Key: FLINK-17752
> URL: https://issues.apache.org/jira/browse/FLINK-17752
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Shengkai Fang
>Priority: Critical
> Fix For: 1.11.0
>
>
> Currently, we are using RFC3339_TIMESTAMP_FORMAT (which adds a timezone at 
> the end of the string) as the timestamp format in JSON. However, the string 
> representation of {{TIMESTAMP (WITHOUT TIME ZONE)}} shouldn't add 'Z' at 
> the end. 
> Previous discussion: 
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/TIME-TIMESTAMP-parse-in-Flink-TABLE-SQL-API-td33061.html#a33095



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18245) Support to parse -1 for MemorySize and Duration ConfigOption

2020-06-11 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133936#comment-17133936
 ] 

Xintong Song commented on FLINK-18245:
--

I'm okay with having such disabled values. However, I want to bring the 
following things to attention.
 * A {{MemorySize}} should never be negative. We rely on this assumption for 
sanity checks in many places in the memory calculations. If we introduce the 
disabled values, it would be good if {{Configuration}} recognized such 
values and returned {{defaultValue}} / {{Optional.empty()}} when {{get()}} / 
{{getOptional()}} is called on them.
 * We also rely on {{Configuration.contains}} & {{Configuration.containsKey}} 
to decide whether a configuration option is specified or not. If we introduce 
the disabled values, it would be good if these two methods returned {{false}} 
for such disabled values. A rough sketch of this behavior follows below.
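
To make that concrete, here is a minimal, self-contained sketch of the expected behavior (illustration only; the class and method names below are made up and do not reflect Flink's actual {{Configuration}} implementation):

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

/** Toy config holder that treats "-1" as "option disabled", i.e. effectively unset. */
public class DisabledAwareConfig {

    private static final String DISABLED = "-1";

    private final Map<String, String> values = new HashMap<>();

    public void set(String key, String value) {
        values.put(key, value);
    }

    /** Returns false both for missing keys and for keys set to the disabled value. */
    public boolean contains(String key) {
        String value = values.get(key);
        return value != null && !DISABLED.equals(value);
    }

    /** Falls back to the default when the key is absent or disabled. */
    public String get(String key, String defaultValue) {
        return contains(key) ? values.get(key) : defaultValue;
    }

    /** Returns Optional.empty() when the key is absent or disabled. */
    public Optional<String> getOptional(String key) {
        return contains(key) ? Optional.of(values.get(key)) : Optional.empty();
    }
}
{code}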

> Support to parse -1 for MemorySize and Duration ConfigOption
> 
>
> Key: FLINK-18245
> URL: https://issues.apache.org/jira/browse/FLINK-18245
> Project: Flink
>  Issue Type: New Feature
>  Components: API / Core
>Reporter: Jark Wu
>Priority: Major
>
> Currently, the MemorySize and Duration ConfigOptions don't support parsing 
> {{-1}} or {{-1s}}. 
> {code:java}
> java.lang.NumberFormatException: text does not start with a number
>   at org.apache.flink.configuration.MemorySize.parseBytes(MemorySize.java:294)
> {code}
> That means we can't use {{-1}} as a disabled value and have to use {{0}} 
> instead, which may confuse users in some scenarios. 
> There is some discussion around this topic in: 
> https://github.com/apache/flink/pull/12536#discussion_r438019632



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12627: [FLINK-18083][hbase] Improve exception message of TIMESTAMP/TIME out of the HBase connector supported precision

2020-06-11 Thread GitBox


flinkbot commented on pull request #12627:
URL: https://github.com/apache/flink/pull/12627#issuecomment-643068457


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 62a91a8087b1aa7d6a68d40a40c8473664dfa170 (Fri Jun 12 
05:10:56 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18083) Improve exception message of TIMESTAMP/TIME out of the HBase connector supported precision

2020-06-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18083:
---
Labels: pull-request-available  (was: )

> Improve exception message of TIMESTAMP/TIME  out of the HBase connector 
> supported precision
> ---
>
> Key: FLINK-18083
> URL: https://issues.apache.org/jira/browse/FLINK-18083
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643039828


   
   ## CI report:
   
   * b12eb7d44f3aa147a8580cd3fdeda71290dc6efb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3355)
 
   * 43d37e249d4877a4c65e538f32143a3b3c41e22f UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12625: [FLINK-18197][table-runtime-blink][hive] Add more logs for hive strea…

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12625:
URL: https://github.com/apache/flink/pull/12625#issuecomment-643058855


   
   ## CI report:
   
   * f1120637f253f329b5c38d43d27d3d497df248a9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3360)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12626: [hotfix] fix doc link error in avro.zh.md

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12626:
URL: https://github.com/apache/flink/pull/12626#issuecomment-643058889


   
   ## CI report:
   
   * 7602fe9e2a50fc5753015e68c54468086181d2d1 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3361)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12619: [FLINK-17623][Connectors/Elasticsearch] Support user resource cleanup in Elasticsearch sink

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12619:
URL: https://github.com/apache/flink/pull/12619#issuecomment-643008650


   
   ## CI report:
   
   * a73bb3685261f7e63cb062190eb5ae7a3e68739a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3347)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang opened a new pull request #12627: [FLINK-18083][hbase] Improve exception message of TIMESTAMP/TIME out of the HBase connector supported precision

2020-06-11 Thread GitBox


leonardBang opened a new pull request #12627:
URL: https://github.com/apache/flink/pull/12627


   ## What is the purpose of the change
   
   * This pull request improves the exception message thrown when a TIMESTAMP/TIME 
precision is outside the range supported by the HBase connector; the HBase 
connector only supports precisions in the range [0, 3]. A sketch of the kind of 
check involved is shown below.
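
   For illustration only, a minimal sketch of such a precision guard (a hypothetical helper, not the actual `HBaseTypeUtils`/`HBaseSerde` code):

   ```java
   // Hypothetical illustration: HBase stores timestamps as millisecond longs,
   // so only TIMESTAMP/TIME precisions 0..3 can be supported.
   public class PrecisionCheckDemo {

       private static final int MIN_PRECISION = 0;
       private static final int MAX_PRECISION = 3;

       static void checkTimestampPrecision(int precision) {
           if (precision < MIN_PRECISION || precision > MAX_PRECISION) {
               throw new UnsupportedOperationException(
                       String.format(
                               "The precision %d of TIMESTAMP type is out of the range [%d, %d] "
                                       + "supported by the HBase connector.",
                               precision, MIN_PRECISION, MAX_PRECISION));
           }
       }

       public static void main(String[] args) {
           checkTimestampPrecision(3); // ok
           checkTimestampPrecision(9); // throws with a descriptive message
       }
   }
   ```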
   
   
   ## Brief change log
   
* update file org/apache/flink/connector/hbase/util/HBaseTypeUtils.java
* update file org/apache/flink/connector/hbase/util/HBaseSerde.java
   
   
   ## Verifying this change
   
* Add unit tests in `HBaseTableFactoryTest.java`.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12605: [FLINK-18242][state-backend-rocksdb] Separate RocksDBOptionsFactory from OptionsFactory

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12605:
URL: https://github.com/apache/flink/pull/12605#issuecomment-642611275


   
   ## CI report:
   
   * 9272277477abe581fa02865d57ee929084e0f948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3317)
 
   * 24d6b9be546ae19434f9f6a3c3eed0209e4e9928 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3359)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12527: [FLINK-18173][build] Bundle flink-csv,flink-json,flink-avro jars in lib

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12527:
URL: https://github.com/apache/flink/pull/12527#issuecomment-640564258


   
   ## CI report:
   
   * a6add24c7d5a479f5ad83bf25f3086881b69cad7 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3293)
 
   * 7ea595432ea5af30e292a6538eb27b6de6eeef1d UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18083) Improve exception message of TIMESTAMP/TIME out of the HBase connector supported precision

2020-06-11 Thread Leonard Xu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Leonard Xu updated FLINK-18083:
---
Summary: Improve exception message of TIMESTAMP/TIME  out of the HBase 
connector supported precision  (was: DDL TIMESTAMP(9) defined on HBase table 
lost nano seconds )

> Improve exception message of TIMESTAMP/TIME  out of the HBase connector 
> supported precision
> ---
>
> Key: FLINK-18083
> URL: https://issues.apache.org/jira/browse/FLINK-18083
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
> Fix For: 1.11.0
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12625: [FLINK-18197][table-runtime-blink][hive] Add more logs for hive strea…

2020-06-11 Thread GitBox


flinkbot commented on pull request #12625:
URL: https://github.com/apache/flink/pull/12625#issuecomment-643058855


   
   ## CI report:
   
   * f1120637f253f329b5c38d43d27d3d497df248a9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12626: [hotfix] fix doc link error in avro.zh.md

2020-06-11 Thread GitBox


flinkbot commented on pull request #12626:
URL: https://github.com/apache/flink/pull/12626#issuecomment-643058889


   
   ## CI report:
   
   * 7602fe9e2a50fc5753015e68c54468086181d2d1 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12624: [FLINK-18205] Mitigate the use of reflection in Utils and HadoopModule

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12624:
URL: https://github.com/apache/flink/pull/12624#issuecomment-643054642


   
   ## CI report:
   
   * 7781df7876d274f3598e0620e007e9a362e2b6cf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3358)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12605: [FLINK-18242][state-backend-rocksdb] Separate RocksDBOptionsFactory from OptionsFactory

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12605:
URL: https://github.com/apache/flink/pull/12605#issuecomment-642611275


   
   ## CI report:
   
   * 9272277477abe581fa02865d57ee929084e0f948 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3317)
 
   * 24d6b9be546ae19434f9f6a3c3eed0209e4e9928 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12626: [hotfix] fix doc link error in avro.zh.md

2020-06-11 Thread GitBox


flinkbot commented on pull request #12626:
URL: https://github.com/apache/flink/pull/12626#issuecomment-643056881


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7602fe9e2a50fc5753015e68c54468086181d2d1 (Fri Jun 12 
04:22:48 UTC 2020)
   
 ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] KarmaGYZ commented on pull request #12626: [hotfix] fix doc link error in avro.zh.md

2020-06-11 Thread GitBox


KarmaGYZ commented on pull request #12626:
URL: https://github.com/apache/flink/pull/12626#issuecomment-643056528


   cc @wuchong 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-18269) ContinuousFileReaderOperator also supports monitoring subdirectories

2020-06-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-18269.
---
Resolution: Not A Problem

> ContinuousFileReaderOperator also supports monitoring subdirectories
> 
>
> Key: FLINK-18269
> URL: https://issues.apache.org/jira/browse/FLINK-18269
> Project: Flink
>  Issue Type: Improvement
>  Components: Connectors / FileSystem
>Reporter: Yun Gao
>Priority: Major
>
> Currently ContinuousFileReaderOperator only supports monitoring the files 
> directly under the given path. However, if the source directory is created by 
> sinks that support bucketing, it will contain subdirectories and cannot be 
> monitored.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ opened a new pull request #12626: [hotfix] fix doc link error in avro.zh.md

2020-06-11 Thread GitBox


KarmaGYZ opened a new pull request #12626:
URL: https://github.com/apache/flink/pull/12626


   
   
   ## What is the purpose of the change
   
   Fix doc link error in avro.zh.md
   
   ## Brief change log
   
   Fix doc link error in avro.zh.md
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439197606



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -702,11 +714,14 @@ private void disposeAllOperators(boolean logOnlyErrors) 
throws Exception {
operator.dispose();
}
catch (Exception e) {

Review comment:
   Hmm... yes, a little bit. Is it safe to make the change, given that all the 
functions in this class throw exceptions?
   
   I also feel the use of Throwable vs. Exception in this class is inconsistent, 
but I guess that's out of scope for this fix?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12625: [FLINK-18197][table-runtime-blink][hive] Add more logs for hive strea…

2020-06-11 Thread GitBox


flinkbot commented on pull request #12625:
URL: https://github.com/apache/flink/pull/12625#issuecomment-643055699


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit f1120637f253f329b5c38d43d27d3d497df248a9 (Fri Jun 12 
04:17:30 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
* **This pull request references an unassigned [Jira 
ticket](https://issues.apache.org/jira/browse/FLINK-18197).** According to the 
[code contribution 
guide](https://flink.apache.org/contributing/contribute-code.html), tickets 
need to be assigned before starting with the implementation work.
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439197606



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -702,11 +714,14 @@ private void disposeAllOperators(boolean logOnlyErrors) 
throws Exception {
operator.dispose();
}
catch (Exception e) {

Review comment:
   Hmm... yes, a little bit. Is it safe to make the change? I think so.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18197) Add more logs for hive streaming integration

2020-06-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18197:
---
Labels: pull-request-available  (was: )

> Add more logs for hive streaming integration
> 
>
> Key: FLINK-18197
> URL: https://issues.apache.org/jira/browse/FLINK-18197
> Project: Flink
>  Issue Type: Task
>  Components: Connectors / FileSystem, Connectors / Hive
>Reporter: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> Should add more logs for Hive table streaming source/sink and lookup join.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache opened a new pull request #12625: [FLINK-18197][table-runtime-blink][hive] Add more logs for hive strea…

2020-06-11 Thread GitBox


lirui-apache opened a new pull request #12625:
URL: https://github.com/apache/flink/pull/12625


   …ming integration
   
   
   
   ## What is the purpose of the change
   
   Add more logs for Hive streaming integration. It's helpful for debugging 
when something goes wrong.
   
   
   ## Brief change log
   
 - Log when lookup join cache is reloaded.
 - Log when new partitions are found in streaming source
 - Log when new partitions are committed in streaming sink
   
   
   ## Verifying this change
   
   Manually verified the logs.
   
   ## Does this pull request potentially affect one of the following parts:
   
   NA
   
   ## Documentation
   
   NA
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439197491



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;

Review comment:
   > Maybe we can emulate it by having a collection of closeables and close 
   > them in a loop:
   > 
   > ```
   > List<ThrowingRunnable<Throwable>> runs = asList(cancelables::close, this::shutdownAsyncThreads, this::cleanup, this::disposeAllOperators, ...);
   > Throwable suppressedThrowable = null;
   > for (ThrowingRunnable run: runs) {
   > try {
   > run.run();
   > } catch (Throwable t) {
   > suppressedThrowable = ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);
   > }
   > }
   > ```
   
   yep, that's a good way, if we decide to suppress every exception.
   
   I do not like the way we have so many try-catch blocks as well :-)
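
   For context, a self-contained sketch of that loop-and-suppress pattern (plain Java; the `ThrowingRunnable` interface and `firstOrSuppressed` helper are written out here instead of using Flink's utilities, so the names are illustrative only):

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class CleanupDemo {

       @FunctionalInterface
       interface ThrowingRunnable {
           void run() throws Throwable;
       }

       /** Keep the first failure as primary and attach later ones as suppressed. */
       static Throwable firstOrSuppressed(Throwable newThrowable, Throwable previous) {
           if (previous == null) {
               return newThrowable;
           }
           previous.addSuppressed(newThrowable);
           return previous;
       }

       static void runCleanupSteps(List<ThrowingRunnable> steps) throws Exception {
           Throwable suppressed = null;
           for (ThrowingRunnable step : steps) {
               try {
                   step.run();
               } catch (Throwable t) {
                   // keep going: every cleanup step runs even if an earlier one failed
                   suppressed = firstOrSuppressed(t, suppressed);
               }
           }
           if (suppressed != null) {
               throw new Exception("Cleanup failed", suppressed);
           }
       }

       public static void main(String[] args) throws Exception {
           runCleanupSteps(Arrays.asList(
                   () -> System.out.println("close cancelables"),
                   () -> { throw new IllegalStateException("one cleanup step failed"); },
                   () -> System.out.println("dispose operators")));
       }
   }
   ```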





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW commented on a change in pull request #12617: [FLINK-18252][checkpointing] Fix savepoint overtaking output data.

2020-06-11 Thread GitBox


zhijiangW commented on a change in pull request #12617:
URL: https://github.com/apache/flink/pull/12617#discussion_r439197490



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/SubtaskCheckpointCoordinatorImpl.java
##
@@ -253,7 +249,7 @@ public void checkpointState(
// Step (2): Send the checkpoint barrier downstream
operatorChain.broadcastEvent(
new CheckpointBarrier(metadata.getCheckpointId(), 
metadata.getTimestamp(), options),
-   unalignedCheckpointEnabled);
+   options.isUnalignedCheckpoint());

Review comment:
   nit: it might be more consistent to either remove the `#includeChannelState` 
method below or use that method in all the places.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12624: [FLINK-18205] Mitigate the use of reflection in Utils and HadoopModule

2020-06-11 Thread GitBox


flinkbot commented on pull request #12624:
URL: https://github.com/apache/flink/pull/12624#issuecomment-643054642


   
   ## CI report:
   
   * 7781df7876d274f3598e0620e007e9a362e2b6cf UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12621:
URL: https://github.com/apache/flink/pull/12621#issuecomment-643039777


   
   ## CI report:
   
   * 9c04ca36234101ce6b0886a062dcc16977834a96 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3354)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12598: [FLINK-18246][python][e2e] Disable PyFlink e2e tests when running on jdk11 to avoid the UnsupportedClassVersionError.

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12598:
URL: https://github.com/apache/flink/pull/12598#issuecomment-642529967


   
   ## CI report:
   
   * a8010170fae4777d9cd60b65b737c71f86e33a8a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3281)
 
   * 165e2965fc4b03f95bc0108200360b52ca4898c6 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3357)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-18269) ContinuousFileReaderOperator also supports monitoring subdirectories

2020-06-11 Thread Yun Gao (Jira)
Yun Gao created FLINK-18269:
---

 Summary: ContinuousFileReaderOperator also supports monitoring 
subdirectories
 Key: FLINK-18269
 URL: https://issues.apache.org/jira/browse/FLINK-18269
 Project: Flink
  Issue Type: Improvement
  Components: Connectors / FileSystem
Reporter: Yun Gao


Currently ContinuousFileReaderOperator only supports monitoring the files 
directly under the given path. However, if the source directory is created by 
sinks that support bucketing, it will contain subdirectories and cannot be 
monitored.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhuzhurk removed a comment on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-11 Thread GitBox


zhuzhurk removed a comment on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-643007617


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhuzhurk commented on pull request #12256: [FLINK-17018][runtime] Allocates slots in bulks for pipelined region scheduling

2020-06-11 Thread GitBox


zhuzhurk commented on pull request #12256:
URL: https://github.com/apache/flink/pull/12256#issuecomment-643052813


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439194857



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;

Review comment:
   > I guess we can't use Guava `Closer` because we want to handle any type 
of exception, right?
   > And not CloseableRegistry because it closes silently.
   
   Hmm, need more context to understand this part.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18205) Mitigate the use of reflection in HadoopModule and Utils

2020-06-11 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133889#comment-17133889
 ] 

Yangze Guo edited comment on FLINK-18205 at 6/12/20, 3:58 AM:
--

FYI, I also found the same usage in org.apache.flink.yarn.Utils. I plan to 
mitigate the usage of reflection in that class in this ticket as well.


was (Author: karmagyz):
FYI, I also found the same usage in org.apache.flink.yarn.Utils. I plan to 
mitigate the usage of reflection in that class as well.

> Mitigate the use of reflection in HadoopModule and Utils
> 
>
> Key: FLINK-18205
> URL: https://issues.apache.org/jira/browse/FLINK-18205
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Since Flink dropped support for Hadoop 1 in FLINK-4895, it would be good to 
> mitigate the use of reflection in {{HadoopModule}}. To be specific, we could 
> make sure the following methods exist in Hadoop 2+:
> - Credentials#getAllTokens
> - Credentials#readTokenStorageFile
> - UserGroupInformation#addCredentials



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18266) When proctime is used in a table, the job output is wrong if the SQL does not cast it explicitly

2020-06-11 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133899#comment-17133899
 ] 

Benchao Li commented on FLINK-18266:


[~liqi316] Please use english in Jira issue.

> When proctime is used in a table, the job output is wrong if the SQL does not cast it explicitly
> -
>
> Key: FLINK-18266
> URL: https://issues.apache.org/jira/browse/FLINK-18266
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Table SQL / Planner
>Affects Versions: 1.9.1
>Reporter: liqi316
>Priority: Trivial
>
> When registering the table, proctime was specified, but running `select * from table` reports that the type cannot be cast:
> Caused by: java.lang.ClassCastException: java.time.LocalDateTime cannot be cast to java.lang.Long
>  at org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:32)
>  at org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:93)
>  at org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:44)
>  at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:635)
>  ... 51 more
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18268) Correct Table API in Temporal table docs

2020-06-11 Thread Leonard Xu (Jira)
Leonard Xu created FLINK-18268:
--

 Summary: Correct Table API in Temporal table docs
 Key: FLINK-18268
 URL: https://issues.apache.org/jira/browse/FLINK-18268
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.11.0
Reporter: Leonard Xu
 Fix For: 1.11.0


*see user's feedback:*

*[http://apache-flink.147419.n8.nabble.com/flink-TableEnvironment-can-not-call-getTableEnvironment-api-tt3871.html]*



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18267) Missing NOTICE file in flink-examples-streaming

2020-06-11 Thread Yangze Guo (Jira)
Yangze Guo created FLINK-18267:
--

 Summary: Missing NOTICE file in flink-examples-streaming
 Key: FLINK-18267
 URL: https://issues.apache.org/jira/browse/FLINK-18267
 Project: Flink
  Issue Type: Bug
  Components: Examples
Affects Versions: 1.11.0
Reporter: Yangze Guo
 Fix For: 1.11.0, 1.12.0


Add missing NOTICE file in flink-examples-streaming since we introduce new 
dependencies for {{jcuda}} and {{jcublas}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18267) Missing NOTICE file in flink-examples-streaming

2020-06-11 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133895#comment-17133895
 ] 

Yangze Guo commented on FLINK-18267:


[~rmetzger] Could you kindly assign this to me?

> Missing NOTICE file in flink-examples-streaming
> ---
>
> Key: FLINK-18267
> URL: https://issues.apache.org/jira/browse/FLINK-18267
> Project: Flink
>  Issue Type: Bug
>  Components: Examples
>Affects Versions: 1.11.0
>Reporter: Yangze Guo
>Priority: Blocker
> Fix For: 1.11.0, 1.12.0
>
>
> Add missing NOTICE file in flink-examples-streaming since we introduce new 
> dependencies for {{jcuda}} and {{jcublas}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12624: [FLINK-18205] Mitigate the use of reflection in Utils and HadoopModule

2020-06-11 Thread GitBox


flinkbot commented on pull request #12624:
URL: https://github.com/apache/flink/pull/12624#issuecomment-643049794


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 7781df7876d274f3598e0620e007e9a362e2b6cf (Fri Jun 12 
03:50:56 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
 Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18205) Mitigate the use of reflection in HadoopModule and Utils

2020-06-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18205:
---
Labels: pull-request-available  (was: )

> Mitigate the use of reflection in HadoopModule and Utils
> 
>
> Key: FLINK-18205
> URL: https://issues.apache.org/jira/browse/FLINK-18205
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Since Flink dropped support for Hadoop 1 in FLINK-4895, it would be good to 
> mitigate the use of reflection in {{HadoopModule}}. To be specific, we could 
> make sure the following methods exist in Hadoop 2+:
> - Credentials#getAllTokens
> - Credentials#readTokenStorageFile
> - UserGroupInformation#addCredentials



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] KarmaGYZ opened a new pull request #12624: [FLINK-18205] Mitigate the use of reflection in Utils and HadoopModule

2020-06-11 Thread GitBox


KarmaGYZ opened a new pull request #12624:
URL: https://github.com/apache/flink/pull/12624


   
   ## What is the purpose of the change
   
   Since Flink dropped support for Hadoop 1 in FLINK-4895, it would be good to 
mitigate the use of reflection in HadoopModule and Utils. To be specific, we 
can rely on the following methods existing in Hadoop 2+:
   
   - Credentials#getAllTokens
   - Credentials#readTokenStorageFile
   - UserGroupInformation#addCredentials
   
   ## Brief change log
   
   Mitigate the use of reflection in Utils and HadoopModule
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643039828


   
   ## CI report:
   
   * b12eb7d44f3aa147a8580cd3fdeda71290dc6efb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3355)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12623: [FLINK-15687][runtime][test] Fix test instability due to concurrent access to JobTable.

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12623:
URL: https://github.com/apache/flink/pull/12623#issuecomment-643044583


   
   ## CI report:
   
   * 26a82902112ca6b5139b0afb7e7de6d9d58003d4 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3356)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12598: [FLINK-18246][python][e2e] Disable PyFlink e2e tests when running on jdk11 to avoid the UnsupportedClassVersionError.

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12598:
URL: https://github.com/apache/flink/pull/12598#issuecomment-642529967


   
   ## CI report:
   
   * a8010170fae4777d9cd60b65b737c71f86e33a8a Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3281)
 
   * 165e2965fc4b03f95bc0108200360b52ca4898c6 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] danny0405 commented on a change in pull request #12610: [FLINK-17686][doc] Add document to dataGen, print, blackhole connectors

2020-06-11 Thread GitBox


danny0405 commented on a change in pull request #12610:
URL: https://github.com/apache/flink/pull/12610#discussion_r439189092



##
File path: docs/dev/table/connectors/datagen.md
##
@@ -0,0 +1,149 @@
+---
+title: "DataGen SQL Connector"
+nav-title: DataGen
+nav-parent_id: sql-connectors
+nav-pos: 4
+---
+
+
+Scan Source: Bounded
+Scan Source: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Datagen connector allows for reading by data generation rules.
+
+The Datagen connector can work with [Computed Column syntax]({{ site.baseurl 
}}/dev/table/sql/create.html#create-table).
+This allows you to generate records flexibly.
+
+The Datagen connector is built-in.
+
+Attention Not support complex types: 
Array, Map, Row. Please construct these types by computed column.
+

Review comment:
   `Not support complex types` -> `Complex types are not supported`

##
File path: docs/dev/table/connectors/blackhole.md
##
@@ -0,0 +1,94 @@
+---
+title: "Blackhole SQL Connector"
+nav-title: Blackhole
+nav-parent_id: sql-connectors
+nav-pos: 6
+---
+
+
+Sink: Bounded
+Sink: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Blackhole connector allows for swallowing all input records. It is 
designed for:
+
+- high performance testing.
+- UDF to output, not substantive sink.
+
+Just like /dev/null device on Unix-like operating systems.
+
+The Print connector is built-in.
+
+How to create an Blackhole table
+

Review comment:
   an Blackhole -> `a Blackhole`

##
File path: docs/dev/table/connectors/blackhole.md
##
@@ -0,0 +1,94 @@
+---
+title: "Blackhole SQL Connector"
+nav-title: Blackhole
+nav-parent_id: sql-connectors
+nav-pos: 6
+---
+
+
+Sink: Bounded
+Sink: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Blackhole connector allows for swallowing all input records. It is 
designed for:
+
+- high performance testing.
+- UDF to output, not substantive sink.
+
+Just like /dev/null device on Unix-like operating systems.
+
+The Print connector is built-in.
+
+How to create an Blackhole table
+
+
+Although it doesn't make sense to define the fields of print table, you need 
to write them all in DDL.
+
+
+
+{% highlight sql %}
+CREATE TABLE blackhole_table (
+ f0 INT,
+ f1 INT,
+ f2 STRING,
+ f3 DOUBLE
+) WITH (
+ 'connector' = 'blackhole'
+)
+{% endhighlight %}
+
+
+
+Another way is using [LIKE Clause]({{ site.baseurl 
}}/dev/table/sql/create.html#create-table).
+
+
+
+{% highlight sql %}
+CREATE TABLE blackhole_table () WITH ('connector' = 'blackhole')
+LIKE source_table (EXCLUDING ALL)
+{% endhighlight %}

Review comment:
   The `()` can be dropped.

##
File path: docs/dev/table/connectors/datagen.md
##
@@ -0,0 +1,149 @@
+---
+title: "DataGen SQL Connector"
+nav-title: DataGen
+nav-parent_id: sql-connectors
+nav-pos: 4
+---
+
+
+Scan Source: Bounded
+Scan Source: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Datagen connector allows for reading by data generation rules.
+
+The Datagen connector can work with [Computed Column syntax]({{ site.baseurl 
}}/dev/table/sql/create.html#create-table).
+This allows you to generate records flexibly.
+
+The Datagen connector is built-in.
+
+Attention Not support complex types: 
Array, Map, Row. Please construct these types by computed column.
+
+How to create an Datagen table
+

Review comment:
   `an` -> `a`

##
File path: docs/dev/table/connectors/datagen.md
##
@@ -0,0 +1,149 @@
+---
+title: "DataGen SQL Connector"
+nav-title: DataGen
+nav-parent_id: sql-connectors
+nav-pos: 4
+---
+
+
+Scan Source: Bounded
+Scan Source: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Datagen connector allows for reading by data generation rules.
+
+The Datagen connector can work with [Computed Column syntax]({{ site.baseurl 
}}/dev/table/sql/create.html#create-table).
+This allows you to generate records flexibly.
+
+The Datagen connector is built-in.
+
+Attention Not support complex types: 
Array, Map, Row. Please construct these types by computed column.
+
+How to create an Datagen table
+
+
+For each field, there are two ways to generate data:
+
+- Random generator: default, you can specify random max and min values. For 
char/varchar/string, the length can be specified.
+- Sequence generator: you can specify sequence start and end values.

Review comment:
   `default` -> `By default`.
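
   For readers following this review, a rough sketch of a DataGen table that exercises both generator kinds. The option keys below are assumed from the draft documentation under review and may differ in the final version; the environment setup is likewise illustrative.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative sketch only; option keys are assumed from the draft docs under review.
class DataGenSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // f_random uses the (default) random generator with min/max bounds,
        // f_sequence uses the sequence generator with start/end values.
        tEnv.executeSql(
                "CREATE TABLE datagen_source ("
                        + "  f_random INT,"
                        + "  f_sequence BIGINT"
                        + ") WITH ("
                        + "  'connector' = 'datagen',"
                        + "  'fields.f_random.min' = '1',"
                        + "  'fields.f_random.max' = '100',"
                        + "  'fields.f_sequence.kind' = 'sequence',"
                        + "  'fields.f_sequence.start' = '1',"
                        + "  'fields.f_sequence.end' = '1000'"
                        + ")");
    }
}
```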





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18205) Mitigate the use of reflection in HadoopModule and Utils

2020-06-11 Thread Yangze Guo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yangze Guo updated FLINK-18205:
---
Summary: Mitigate the use of reflection in HadoopModule and Utils  (was: 
Mitigate the use of reflection in HadoopModule)

> Mitigate the use of reflection in HadoopModule and Utils
> 
>
> Key: FLINK-18205
> URL: https://issues.apache.org/jira/browse/FLINK-18205
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
> Fix For: 1.12.0
>
>
> Since Flink drops support for Hadoop 1 since FLINK-4895, it would be good to 
> mitigate the use of reflection in {{HadoopModule}}. To be specific, we could 
> make sure the following methods exist in Hadoop 2+:
> - Credentials#getAllTokens
> - Credentials#readTokenStorageFile
> - UserGroupInformation#addCredentials
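
For illustration, a minimal sketch of what calling these three Hadoop 2+ APIs directly (instead of via reflection) could look like. The class name and surrounding plumbing are assumptions; only the three method calls come from the description above.

{code:java}
import java.io.File;
import java.util.Collection;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;
import org.apache.hadoop.security.token.TokenIdentifier;

// Illustrative sketch only, not the actual Flink change.
public final class DirectHadoopCallsSketch {

    static void loadAndInstallTokens(
            File tokenFile, Configuration hadoopConf, UserGroupInformation ugi) throws Exception {
        // Previously resolved via reflection; guaranteed to exist on Hadoop 2+.
        Credentials credentials = Credentials.readTokenStorageFile(tokenFile, hadoopConf);

        Collection<Token<? extends TokenIdentifier>> tokens = credentials.getAllTokens();
        System.out.println("Loaded " + tokens.size() + " delegation tokens");

        // Direct call instead of Method.invoke(ugi, credentials).
        ugi.addCredentials(credentials);
    }
}
{code}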



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18205) Mitigate the use of reflection in HadoopModule

2020-06-11 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133889#comment-17133889
 ] 

Yangze Guo commented on FLINK-18205:


FYI, I also found the same usage in org.apache.flink.yarn.Utils. I plan to 
mitigate the usage of reflection in that class as well.

> Mitigate the use of reflection in HadoopModule
> --
>
> Key: FLINK-18205
> URL: https://issues.apache.org/jira/browse/FLINK-18205
> Project: Flink
>  Issue Type: Improvement
>  Components: Deployment / YARN
>Reporter: Yangze Guo
>Assignee: Yangze Guo
>Priority: Major
> Fix For: 1.12.0
>
>
> Since Flink drops support for Hadoop 1 since FLINK-4895, it would be good to 
> mitigate the use of reflection in {{HadoopModule}}. To be specific, we could 
> make sure the following methods exist in Hadoop 2+:
> - Credentials#getAllTokens
> - Credentials#readTokenStorageFile
> - UserGroupInformation#addCredentials



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Assigned] (FLINK-18265) Hidden files should be ignored when the filesystem table searches for partitions

2020-06-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee reassigned FLINK-18265:


Assignee: godfrey he  (was: Jingsong Lee)

> Hidden files should be ignored when the filesystem table searches for 
> partitions
> 
>
> Key: FLINK-18265
> URL: https://issues.apache.org/jira/browse/FLINK-18265
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: godfrey he
>Priority: Blocker
> Fix For: 1.11.0
>
>
> If there are some hidden files in the path of filesystem partitioned table, 
> query this table will occur:
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Partition keys are: 
> [j], incomplete partition spec: {}
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.toFullLinkedPartSpec(FileSystemTableSource.java:209)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.access$800(FileSystemTableSource.java:62)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.lambda$getPaths$0(FileSystemTableSource.java:174)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
> ~[?:1.8.0_152]
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) 
> ~[?:1.8.0_152]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.getPaths(FileSystemTableSource.java:177)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> {code}
> Hidden files should be ignored when the filesystem table searches for 
> partitions. This is not correct partition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12623: [FLINK-15687][runtime][test] Fix test instability due to concurrent access to JobTable.

2020-06-11 Thread GitBox


flinkbot commented on pull request #12623:
URL: https://github.com/apache/flink/pull/12623#issuecomment-643044583


   
   ## CI report:
   
   * 26a82902112ca6b5139b0afb7e7de6d9d58003d4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643039828


   
   ## CI report:
   
   * b12eb7d44f3aa147a8580cd3fdeda71290dc6efb Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3355)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12621:
URL: https://github.com/apache/flink/pull/12621#issuecomment-643039777


   
   ## CI report:
   
   * 9c04ca36234101ce6b0886a062dcc16977834a96 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3354)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-18266) When using proctime in a table, the job output is wrong if the SQL does not cast it explicitly

2020-06-11 Thread liqi316 (Jira)
liqi316 created FLINK-18266:
---

 Summary: When using proctime in a table, the job output is wrong if the SQL does not cast it explicitly
 Key: FLINK-18266
 URL: https://issues.apache.org/jira/browse/FLINK-18266
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / API, Table SQL / Planner
Affects Versions: 1.9.1
Reporter: liqi316


When registering the table, proctime was specified, but running select * from table reports that the type cannot be converted.

Caused by: java.lang.ClassCastException: java.time.LocalDateTime cannot be cast 
to java.lang.Long
 at 
org.apache.flink.api.common.typeutils.base.LongSerializer.copy(LongSerializer.java:32)
 at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:93)
 at 
org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:44)
 at 
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:635)
 ... 51 more

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439183783



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -646,10 +653,14 @@ protected void cleanUpInvoke() throws Exception {
try {
channelIOExecutor.shutdown();
} catch (Throwable t) {
-   LOG.error("Error during shutdown the channel state 
unspill executor", t);
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);
}
 
mailboxProcessor.close();
+
+   if (suppressedThrowable != null) {
+   throw (Exception) suppressedThrowable;

Review comment:
   It does not, but it has to be cast to Exception either here or in 
the function that caught it (outside this function).





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18265) Hidden files should be ignored when the filesystem table searches for partitions

2020-06-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-18265:
-
Component/s: Table SQL / API

> Hidden files should be ignored when the filesystem table searches for 
> partitions
> 
>
> Key: FLINK-18265
> URL: https://issues.apache.org/jira/browse/FLINK-18265
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem, Table SQL / API
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0
>
>
> If there are some hidden files in the path of filesystem partitioned table, 
> query this table will occur:
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Partition keys are: 
> [j], incomplete partition spec: {}
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.toFullLinkedPartSpec(FileSystemTableSource.java:209)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.access$800(FileSystemTableSource.java:62)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.lambda$getPaths$0(FileSystemTableSource.java:174)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
> ~[?:1.8.0_152]
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) 
> ~[?:1.8.0_152]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.getPaths(FileSystemTableSource.java:177)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> {code}
> Hidden files should be ignored when the filesystem table searches for 
> partitions. This is not correct partition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439183586



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;
// stop all asynchronous checkpoint threads
try {
cancelables.close();
shutdownAsyncThreads();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Could not shut down async checkpoint 
threads", t);
+   // catch and suppress the exception to not replace the 
original exception
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);
}
 
// we must! perform this cleanup
try {
cleanup();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Error during cleanup of stream task", t);
+   // catch and suppress the exception to not replace the 
original exception
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);
}
 
// if the operators were not disposed before, do a hard dispose
-   disposeAllOperators(true);
+   try {
+   disposeAllOperators(true);

Review comment:
   I do not think so. Maybe we can change the argument name, but the 
meaning of logOnlyErrors true/false is different.
   
   "False" means `disposalAllOperators` throws an exception as soon as it 
encounters an error.
   
   "True" means `disposalAllOperators` disposes of all operators even though 
encountering errors, and then throw the stacked exceptions.
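
   As a standalone illustration of the pattern being discussed (collect the first failure, suppress later ones, rethrow only after all cleanup steps have run). The helper class and method names are assumptions, not part of this PR.

```java
import org.apache.flink.util.ExceptionUtils;

// Illustrative sketch of the suppress-and-rethrow cleanup pattern, not the PR itself.
class CleanupSketch {

    static void runCleanupSteps(Runnable... steps) throws Exception {
        Throwable firstError = null;
        for (Runnable step : steps) {
            try {
                step.run();
            } catch (Throwable t) {
                // Keep the first failure as the primary exception; later ones are suppressed.
                firstError = ExceptionUtils.firstOrSuppressed(t, firstError);
            }
        }
        if (firstError != null) {
            // Rethrow only after every cleanup step has had a chance to run.
            ExceptionUtils.rethrowException(firstError);
        }
    }
}
```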






This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] lirui-apache commented on a change in pull request #12609: [FLINK-17836][hive][doc] Add document for Hive dim join

2020-06-11 Thread GitBox


lirui-apache commented on a change in pull request #12609:
URL: https://github.com/apache/flink/pull/12609#discussion_r439183606



##
File path: docs/dev/table/hive/hive_streaming.md
##
@@ -163,4 +163,33 @@ SELECT * FROM hive_table /*+ 
OPTIONS('streaming-source.enable'='true', 'streamin
 
 ## Hive Table As Temporal Tables
 
-TODO
+Starting from Flink 1.11.0, you can use a Hive table as temporal table and 
join streaming data with it. Please follow
+the [example]({{ site.baseurl 
}}/dev/table/streaming/temporal_tables.html#temporal-table) to find out how to 
join a
+temporal table. When performing the join, the Hive table will be cached in TM 
memory and each record from the stream
+is looked up in the Hive table to decide whether a match is found. You don't 
need any extra settings to use a Hive table
+as temporal table. But optionally, you can configure the TTL of the Hive table 
cache with the following
+property. After the cache expires, the Hive table will be scanned again to 
load the latest data.
+
+
+  
+
+Key
+Default
+Type
+Description
+
+  
+  
+
+lookup.join.cache.ttl
+60 min
+Duration
+The cache TTL (e.g. 10min) for the build table in lookup join. By 
default the TTL is 60 minutes.
+
+  
+
+
+**Note**:
+1. You need to make sure the Hive table can fit into TM memory since the whole 
table will be cached.
+2. You should set a relatively large value for `lookup.join.cache.ttl`. You'll 
probably have performance issue if
+your Hive table needs to be updated and reloaded too frequently.

Review comment:
   I have mentioned that the whole table will be cached, and the temporal 
table can be either partitioned or non-partitioned. It seems to me that talking 
about new/old partitions here might bring more confusion.
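
   For readers of this thread, a rough sketch of the join shape the documentation describes. The table names, the proctime attribute, and the environment setup are assumptions rather than part of the patch; `lookup.join.cache.ttl` would be configured on the Hive table itself and is not shown here.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative sketch of a processing-time temporal join against a Hive table.
class HiveTemporalJoinSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumes a streaming table "orders" (with a proctime attribute) and the
        // Hive table "dim_hive_table" are already registered in the catalog.
        tEnv.executeSql(
                "SELECT o.order_id, d.name "
                        + "FROM orders AS o "
                        + "JOIN dim_hive_table FOR SYSTEM_TIME AS OF o.proctime AS d "
                        + "ON o.product_id = d.product_id");
    }
}
```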





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12604: [FLINK-18256][orc] Exclude ORC's Hadoop dependency and pull in provided vanilla hadoop in flink-orc

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12604:
URL: https://github.com/apache/flink/pull/12604#issuecomment-642611183


   
   ## CI report:
   
   * 962a120dbdef84aad1f0db143e88bf1c0ba09b2a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3295)
 
   * 778634cee06f83247f7c59797c778ec560b9cd4a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3350)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12620: [FLINK-18226][runtime] Fix ActiveResourceManager requesting extra workers on termination of existing workers.

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12620:
URL: https://github.com/apache/flink/pull/12620#issuecomment-643030171


   
   ## CI report:
   
   * 4123e5f74a0c4f312588fdaf616dce9b6899371b Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3351)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot commented on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643039828


   
   ## CI report:
   
   * b12eb7d44f3aa147a8580cd3fdeda71290dc6efb UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-11 Thread GitBox


flinkbot commented on pull request #12621:
URL: https://github.com/apache/flink/pull/12621#issuecomment-643039777


   
   ## CI report:
   
   * 9c04ca36234101ce6b0886a062dcc16977834a96 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12577: [FLINK-17599][docs] Update documents due to FLIP-84

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12577:
URL: https://github.com/apache/flink/pull/12577#issuecomment-641885509


   
   ## CI report:
   
   * 91cf992ad59795acee6207899d46e243c5d2a7fb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3326)
 
   * 1c609f5b335fab9206b4fabe470899ede61333ff Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3349)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12297: [FLINK-12855] [streaming-java][window-assigners] Add functionality that staggers panes on partitions to distribute workload.

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12297:
URL: https://github.com/apache/flink/pull/12297#issuecomment-632945310


   
   ## CI report:
   
   * 0a113a9888b426fc59223e71f9b407f2f26793b9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3346)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439182569



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;
// stop all asynchronous checkpoint threads
try {
cancelables.close();
shutdownAsyncThreads();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Could not shut down async checkpoint 
threads", t);
+   // catch and suppress the exception to not replace the 
original exception
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);
}
 
// we must! perform this cleanup
try {
cleanup();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Error during cleanup of stream task", t);
+   // catch and suppress the exception to not replace the 
original exception
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);

Review comment:
   what is the suggestion?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17579) Set the resource id of taskexecutor according to environment variable if exist in standalone mode

2020-06-11 Thread Yangze Guo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133867#comment-17133867
 ] 

Yangze Guo commented on FLINK-17579:


I ask because I do not know whether we plan to support local recovery in 
standalone mode as well. If we do, it seems we could not restart a 
{{TaskManager}} with a fixed {{ResourceID}} (the hostname could be fixed, but the uuid 
would be different each time). Do you have any suggestions/ideas for achieving this?

BTW, it seems we do not currently support restarting a TM with a fixed {{ResourceID}} in 
standalone mode, so I think this proposal will not introduce any regression.

> Set the resource id of taskexecutor according to environment variable if 
> exist in standalone mode
> -
>
> Key: FLINK-17579
> URL: https://issues.apache.org/jira/browse/FLINK-17579
> Project: Flink
>  Issue Type: Sub-task
>  Components: Runtime / Coordination
>Reporter: Yangze Guo
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> Allow user to specify the resource id of TaskExecutor through the environment 
> variable in standalone mode. The name of that variable could be 
> {{FLINK_STANDALONE_TASK_EXECUTOR_ID}}
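
For illustration, a minimal sketch of how a TaskExecutor could resolve its ResourceID from the proposed variable, falling back to the generated id used today. Only the variable name comes from the issue; everything else is an assumption.

{code:java}
import org.apache.flink.runtime.clusterframework.types.ResourceID;

// Illustrative sketch only; not the actual implementation.
class StandaloneResourceIdSketch {

    static ResourceID resolveResourceId() {
        String fromEnv = System.getenv("FLINK_STANDALONE_TASK_EXECUTOR_ID");
        if (fromEnv != null && !fromEnv.isEmpty()) {
            // Fixed id supplied by the deployment, stable across restarts.
            return new ResourceID(fromEnv);
        }
        // Current behavior: a freshly generated id on every start.
        return ResourceID.generate();
    }
}
{code}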



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18265) Hidden files should be ignored when the filesystem table searches for partitions

2020-06-11 Thread Jingsong Lee (Jira)
Jingsong Lee created FLINK-18265:


 Summary: Hidden files should be ignored when the filesystem table 
searches for partitions
 Key: FLINK-18265
 URL: https://issues.apache.org/jira/browse/FLINK-18265
 Project: Flink
  Issue Type: Bug
  Components: Connectors / FileSystem
Reporter: Jingsong Lee
Assignee: Jingsong Lee
 Fix For: 1.11.0


If there are hidden files in the path of a partitioned filesystem table, 
querying the table fails with:
{code:java}
Caused by: org.apache.flink.table.api.TableException: Partition keys are: [j], 
incomplete partition spec: {}
at 
org.apache.flink.table.filesystem.FileSystemTableSource.toFullLinkedPartSpec(FileSystemTableSource.java:209)
 ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
at 
org.apache.flink.table.filesystem.FileSystemTableSource.access$800(FileSystemTableSource.java:62)
 ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
at 
org.apache.flink.table.filesystem.FileSystemTableSource$1.lambda$getPaths$0(FileSystemTableSource.java:174)
 ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
~[?:1.8.0_152]
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382) 
~[?:1.8.0_152]
at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
~[?:1.8.0_152]
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
~[?:1.8.0_152]
at 
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) 
~[?:1.8.0_152]
at 
java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
 ~[?:1.8.0_152]
at 
java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) 
~[?:1.8.0_152]
at 
org.apache.flink.table.filesystem.FileSystemTableSource$1.getPaths(FileSystemTableSource.java:177)
 ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
{code}
Hidden files should be ignored when the filesystem table searches for 
partitions; they are not valid partitions.
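
For illustration, a hypothetical sketch of the kind of filter the fix implies (skip entries whose names start with '.' or '_' when listing partition directories). The class and method names below are made up and are not the actual patch.

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.flink.core.fs.FileStatus;
import org.apache.flink.core.fs.FileSystem;
import org.apache.flink.core.fs.Path;

// Illustrative sketch only; not the actual Flink patch.
class PartitionListingSketch {

    static boolean isHiddenFile(Path path) {
        String name = path.getName();
        return name.startsWith(".") || name.startsWith("_");
    }

    static List<Path> listPartitionDirs(FileSystem fs, Path tableRoot) throws IOException {
        List<Path> result = new ArrayList<>();
        for (FileStatus status : fs.listStatus(tableRoot)) {
            // Hidden files such as _SUCCESS or .staging are not partitions and must be skipped.
            if (!isHiddenFile(status.getPath())) {
                result.add(status.getPath());
            }
        }
        return result;
    }
}
{code}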



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439182059



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;
// stop all asynchronous checkpoint threads
try {
cancelables.close();
shutdownAsyncThreads();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Could not shut down async checkpoint 
threads", t);
+   // catch and suppress the exception to not replace the 
original exception
+   suppressedThrowable = 
ExceptionUtils.firstOrSuppressed(t, suppressedThrowable);

Review comment:
   What is the suggestion here?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18066) Add documentation for how to develop a new table source/sink

2020-06-11 Thread wangsong (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133863#comment-17133863
 ] 

wangsong commented on FLINK-18066:
--

Hi, [~twalthr] 
    When will this issue be completed? I am translating this document into 
Chinese; the markdown file is located at 
{{flink/docs/dev/table/sourceSinks.zh.md}}.
Do I need to wait for the issue to be completed before translating the latest 
document?
Thank you 

> Add documentation for how to develop a new table source/sink
> 
>
> Key: FLINK-18066
> URL: https://issues.apache.org/jira/browse/FLINK-18066
> Project: Flink
>  Issue Type: Sub-task
>  Components: Documentation, Table SQL / API
>Reporter: Timo Walther
>Assignee: Timo Walther
>Priority: Critical
>
> Covers how to write a custom source/sink and format using FLIP-95 interfaces.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18265) Hidden files should be ignored when the filesystem table searches for partitions

2020-06-11 Thread Jingsong Lee (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jingsong Lee updated FLINK-18265:
-
Priority: Blocker  (was: Major)

> Hidden files should be ignored when the filesystem table searches for 
> partitions
> 
>
> Key: FLINK-18265
> URL: https://issues.apache.org/jira/browse/FLINK-18265
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Reporter: Jingsong Lee
>Assignee: Jingsong Lee
>Priority: Blocker
> Fix For: 1.11.0
>
>
> If there are some hidden files in the path of filesystem partitioned table, 
> query this table will occur:
> {code:java}
> Caused by: org.apache.flink.table.api.TableException: Partition keys are: 
> [j], incomplete partition spec: {}
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.toFullLinkedPartSpec(FileSystemTableSource.java:209)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource.access$800(FileSystemTableSource.java:62)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.lambda$getPaths$0(FileSystemTableSource.java:174)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> at 
> java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193) 
> ~[?:1.8.0_152]
> at 
> java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:545) 
> ~[?:1.8.0_152]
> at 
> java.util.stream.AbstractPipeline.evaluateToArrayNode(AbstractPipeline.java:260)
>  ~[?:1.8.0_152]
> at 
> java.util.stream.ReferencePipeline.toArray(ReferencePipeline.java:438) 
> ~[?:1.8.0_152]
> at 
> org.apache.flink.table.filesystem.FileSystemTableSource$1.getPaths(FileSystemTableSource.java:177)
>  ~[flink-table-blink_2.11-1.12-SNAPSHOT.jar:1.12-SNAPSHOT]
> {code}
> Hidden files should be ignored when the filesystem table searches for 
> partitions. This is not correct partition.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17292) Translate Fault Tolerance training lesson to Chinese

2020-06-11 Thread Bai Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133861#comment-17133861
 ] 

Bai Xu commented on FLINK-17292:


[~alpinegizmo] I'm so sorry. I was not sure if I could really take on this task, 
because it had not been assigned to me before. You can assign this ticket 
to [~RocMarshal] now.

> Translate Fault Tolerance training lesson to Chinese
> 
>
> Key: FLINK-17292
> URL: https://issues.apache.org/jira/browse/FLINK-17292
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation, Documentation / Training
>Reporter: David Anderson
>Priority: Major
>
> This ticket is about translating the new tutorial in 
> docs/training/fault_tolerance.zh.md.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439181920



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();
 
+   Throwable suppressedThrowable = null;
// stop all asynchronous checkpoint threads
try {
cancelables.close();
shutdownAsyncThreads();
} catch (Throwable t) {
-   // catch and log the exception to not replace the 
original exception
-   LOG.error("Could not shut down async checkpoint 
threads", t);
+   // catch and suppress the exception to not replace the 
original exception

Review comment:
   OK, I agree.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12623: [FLINK-15687][runtime][test] Fix test instability due to concurrent access to JobTable.

2020-06-11 Thread GitBox


flinkbot commented on pull request #12623:
URL: https://github.com/apache/flink/pull/12623#issuecomment-643038551


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 26a82902112ca6b5139b0afb7e7de6d9d58003d4 (Fri Jun 12 
03:03:43 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439181590



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -613,25 +614,31 @@ protected void cleanUpInvoke() throws Exception {
// stop all timers and threads
tryShutdownTimerService();

Review comment:
   I have no idea; the current change does not alter the behavior of 
`cleanUpInvoke`. 
   
   I am a bit hesitant to make changes that alter the behavior.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18240) Correct modulus function usage in documentation or allow % operator

2020-06-11 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133858#comment-17133858
 ] 

Benchao Li commented on FLINK-18240:


[~Leonard Xu] I think you are right, we can support it this way. +1 to support 
it.

> Correct modulus function usage in documentation or allow % operator
> ---
>
> Key: FLINK-18240
> URL: https://issues.apache.org/jira/browse/FLINK-18240
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation, Table SQL / API
>Reporter: Jark Wu
>Priority: Major
>
> In the documentation: 
> https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/queries.html#scan-projection-and-filter
> There is an example:
> {code}
> SELECT * FROM Orders WHERE a % 2 = 0
> {code}
> But % operator is not allowed in Flink:
> {code:java}
> org.apache.calcite.sql.parser.SqlParseException: Percent remainder '%' is not 
> allowed under the current SQL conformance level
> {code}
> Either we correct the documentation to use {{MOD}} function, or allow % 
> operator. 
> This is reported in user-zh ML: 
> http://apache-flink.147419.n8.nabble.com/FLINK-SQL-td3822.html
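
For illustration, the MOD-based form of the documented query, executed through the Table API. The environment setup is an assumption, and the "Orders" table (with an integer column "a") is presumed to be registered already.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

// Illustrative sketch: the MOD(a, 2) rewrite of the documented % example.
class ModQueryExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumes an "Orders" table with an integer column "a" is already registered.
        Table evenRows = tEnv.sqlQuery("SELECT * FROM Orders WHERE MOD(a, 2) = 0");
        evenRows.printSchema();
    }
}
{code}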



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17182) RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure

2020-06-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao closed FLINK-17182.
---

> RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure
> 
>
> Key: FLINK-17182
> URL: https://issues.apache.org/jira/browse/FLINK-17182
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7546=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=d2c1c472-9d7b-5913-b8e4-461f3092fb7a
> {code}
> [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 3.943 s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest
> [ERROR] 
> testConcurrentOnSenderBacklogAndRecycle(org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest)
>   Time elapsed: 0.011 s  <<< FAILURE!
> java.lang.AssertionError: There should be 248 buffers available in channel. 
> expected:<248> but was:<238>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle(RemoteInputChannelTest.java:869)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-17182) RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure

2020-06-11 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133857#comment-17133857
 ] 

Yun Gao commented on FLINK-17182:
-

Fix via:

   master: f88d98da41957beca84d1807319a5fb004cd02f8
1.11: 22098b27c32342f3ef74848a86d4b4d8d3c1dfb8

> RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure
> 
>
> Key: FLINK-17182
> URL: https://issues.apache.org/jira/browse/FLINK-17182
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7546=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=d2c1c472-9d7b-5913-b8e4-461f3092fb7a
> {code}
> [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 3.943 s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest
> [ERROR] 
> testConcurrentOnSenderBacklogAndRecycle(org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest)
>   Time elapsed: 0.011 s  <<< FAILURE!
> java.lang.AssertionError: There should be 248 buffers available in channel. 
> expected:<248> but was:<238>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle(RemoteInputChannelTest.java:869)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-17182) RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure

2020-06-11 Thread Yun Gao (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yun Gao resolved FLINK-17182.
-
Resolution: Fixed

> RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle fail on azure
> 
>
> Key: FLINK-17182
> URL: https://issues.apache.org/jira/browse/FLINK-17182
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Network
>Affects Versions: 1.11.0
>Reporter: Dawid Wysakowicz
>Assignee: Yun Gao
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.11.0
>
>
> https://dev.azure.com/rmetzger/Flink/_build/results?buildId=7546=logs=0da23115-68bb-5dcd-192c-bd4c8adebde1=d2c1c472-9d7b-5913-b8e4-461f3092fb7a
> {code}
> [ERROR] Tests run: 21, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 
> 3.943 s <<< FAILURE! - in 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest
> [ERROR] 
> testConcurrentOnSenderBacklogAndRecycle(org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest)
>   Time elapsed: 0.011 s  <<< FAILURE!
> java.lang.AssertionError: There should be 248 buffers available in channel. 
> expected:<248> but was:<238>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at 
> org.apache.flink.runtime.io.network.partition.consumer.RemoteInputChannelTest.testConcurrentOnSenderBacklogAndRecycle(RemoteInputChannelTest.java:869)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>   at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>   at 
> org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18199) Translate "Filesystem SQL Connector" page into Chinese

2020-06-11 Thread michaelli (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133855#comment-17133855
 ] 

michaelli commented on FLINK-18199:
---

Working on this.

> Translate "Filesystem SQL Connector" page into Chinese
> --
>
> Key: FLINK-18199
> URL: https://issues.apache.org/jira/browse/FLINK-18199
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Connectors / FileSystem, 
> Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: michaelli
>Priority: Major
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/filesystem.html
> The markdown file is located in 
> flink/docs/dev/table/connectors/filesystem.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zhijiangW merged pull request #12613: [FLINK-17182][network] Fix the unstable input channel test by recycling exclusive and floating buffer in the same thread

2020-06-11 Thread GitBox


zhijiangW merged pull request #12613:
URL: https://github.com/apache/flink/pull/12613


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zhijiangW merged pull request #11924: [FLINK-17182][network] Fix the unstable input channel test by recycling exclusive and floating buffer in the same thread

2020-06-11 Thread GitBox


zhijiangW merged pull request #11924:
URL: https://github.com/apache/flink/pull/11924


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] xintongsong opened a new pull request #12623: [FLINK-15687][runtime][test] Fix test instability due to concurrent access to JobTable.

2020-06-11 Thread GitBox


xintongsong opened a new pull request #12623:
URL: https://github.com/apache/flink/pull/12623


   ## What is the purpose of the change
   
   This PR fixes the test instabilities due to concurrent access to JobTable.
   
   ## Brief change log
   
   Added a `MainThreadExecutable` argument to 
`TaskSubmissionTestEnvironment#registerJobMasterConnection` to guarantee the 
registration is always executed in the RPC main thread.
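
   A minimal sketch of the idea (helper and parameter names here are illustrative, 
not the exact code in this PR; only `MainThreadExecutable#runAsync` is an existing 
Flink API): the registration is handed over to the RPC endpoint's main thread 
instead of being run on the test thread, so the `JobTable` is never accessed 
concurrently.

{code:java}
import org.apache.flink.runtime.rpc.MainThreadExecutable;

final class RegisterOnMainThreadSketch {
    // Hands the JobTable mutation over to the RPC endpoint's main thread
    // instead of running it on the test thread, so DefaultJobTable is never
    // touched from two threads at once.
    static void registerJobMasterConnection(
            MainThreadExecutable mainThreadExecutable,
            Runnable doRegister) {
        mainThreadExecutable.runAsync(doRegister);
    }
}
{code}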
   
   ## Verifying this change
   
   Manually verified by printing all the access thread names in 
`DefaultJobTable`.
   
   ## Does this pull request potentially affect one of the following parts:
   
  - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18263) Allow external checkpoints to be persisted even when the job is in "Finished" state.

2020-06-11 Thread Yun Tang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133854#comment-17133854
 ] 

Yun Tang commented on FLINK-18263:
--

I think this feature depends on how we define the {{FINISHED}} job status. If 
all tasks are finished, why would we still need to keep that checkpoint, given 
the job has already completed its life-cycle? CC [~zjwang], [~zhuzh] as they 
might have more thoughts on the job status definition.

As you mentioned, we could rewind a job (that reached the FINISHED state) to a 
previous checkpoint if it is retained on FINISHED status. However, the last 
checkpoint might not be taken right before the job finished, so I am not sure 
how much this would contribute; a manual savepoint might be more useful in 
your scenario.
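
For reference, this is roughly how the existing retention modes are configured 
today; the RETAIN_ON_SUCCESS value in the commented-out lines is only the 
proposal from this ticket and does not exist in Flink (sketch, not tested):

{code:java}
import org.apache.flink.streaming.api.environment.CheckpointConfig.ExternalizedCheckpointCleanup;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RetentionSketch {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000);

        // Current behaviour: externalized checkpoints are retained on
        // FAILED/SUSPENDED/CANCELED but always cleaned up on FINISHED.
        env.getCheckpointConfig().enableExternalizedCheckpoints(
                ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

        // Proposal from this ticket (hypothetical, not part of Flink):
        // env.getCheckpointConfig().enableExternalizedCheckpoints(
        //         ExternalizedCheckpointCleanup.RETAIN_ON_SUCCESS);
    }
}
{code}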

> Allow external checkpoints to be persisted even when the job is in "Finished" 
> state.
> 
>
> Key: FLINK-18263
> URL: https://issues.apache.org/jira/browse/FLINK-18263
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing
>Reporter: Mark Cho
>Priority: Major
>  Labels: pull-request-available
>
> Currently, `execution.checkpointing.externalized-checkpoint-retention` 
> configuration supports two options:
> - `DELETE_ON_CANCELLATION` which keeps the externalized checkpoints in FAILED 
> and SUSPENDED state.
> - `RETAIN_ON_CANCELLATION` which keeps the externalized checkpoints in 
> FAILED, SUSPENDED, and CANCELED state.
> This gives us control over the retention of externalized checkpoints in all 
> terminal state of a job, except for the FINISHED state.
> If the job ends up in "FINISHED" state, externalized checkpoints will be 
> automatically cleaned up and there currently is no config that will ensure 
> that these externalized checkpoints to be persisted.
> I found an old Jira ticket FLINK-4512 where this was discussed. I think it 
> would be helpful to have a config that can control the retention policy for 
> FINISHED state as well.
> - This can be useful for cases where we want to rewind a job (that reached 
> the FINISHED state) to a previous checkpoint.
> - When we use externalized checkpoints, we want to fully delegate the 
> checkpoint clean-up to an external process in all job states (without 
> cherrypicking FINISHED state to be cleaned up by Flink).
> We have a quick fix working in our fork where we've changed 
> `ExternalizedCheckpointCleanup` enum:
> {code:java}
> RETAIN_ON_FAILURE (renamed from DELETE_ON_CANCELLATION; retains on FAILED)
> RETAIN_ON_CANCELLATION (kept the same; retains on FAILED, CANCELED)
> RETAIN_ON_SUCCESS (added; retains on FAILED, CANCELED, FINISHED)
> {code}
> Since this change requires changes to multiple components (e.g. config 
> values, REST API, Web UI, etc), I wanted to get the community's thoughts 
> before I invest more time in my quick fix PR (which currently only contains 
> minimal change to get this working).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18044) Add the subtask index information to the SourceReaderContext.

2020-06-11 Thread liufangliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17133852#comment-17133852
 ] 

liufangliang commented on FLINK-18044:
--

Hi [~becket_qin],

Can I pick up this issue?

My solution is to add a method named {{indexOfSubtask}} to the 
{{SourceReaderContext}} interface, as follows:
{code:java}
/**
 * @return The index of this subtask.
 */
int indexOfSubtask();
{code}
Then implement the method in the {{open()}} method of {{SourceOperator}}, as 
follows:
{code:java}
@Override
public int indexOfSubtask() {
    return getRuntimeContext().getIndexOfThisSubtask();
}
{code}
What do you think?
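
If something like this is added, a reader could use it, for example, to build a 
per-subtask Kafka client id as the issue description suggests. A sketch only: 
{{indexOfSubtask()}} is just the proposal above, not an existing API, and the 
helper below is hypothetical.
{code:java}
import java.util.Properties;
import org.apache.flink.api.connector.source.SourceReaderContext;

// Sketch: give every parallel reader its own Kafka client.id.
static Properties kafkaProperties(SourceReaderContext readerContext) {
    Properties props = new Properties();
    props.setProperty("client.id", "my-source-reader-" + readerContext.indexOfSubtask());
    return props;
}
{code}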

 

 

> Add the subtask index information to the SourceReaderContext.
> -
>
> Key: FLINK-18044
> URL: https://issues.apache.org/jira/browse/FLINK-18044
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Priority: Major
>
> It is useful for the `SourceReader` to retrieve its subtask id. For example, 
> Kafka readers can create a consumer with proper client id.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439176768



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -537,6 +537,7 @@ public final void invoke() throws Exception {
afterInvoke();

Review comment:
   mark this as resolved according to offline discussion.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


flinkbot commented on pull request #12622:
URL: https://github.com/apache/flink/pull/12622#issuecomment-643032876


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b12eb7d44f3aa147a8580cd3fdeda71290dc6efb (Fri Jun 12 
02:40:50 UTC 2020)
   
   **Warnings:**
* **6 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] curcur commented on a change in pull request #12525: [FLINK-17769] [runtime] Fix the order of log events on a task failure

2020-06-11 Thread GitBox


curcur commented on a change in pull request #12525:
URL: https://github.com/apache/flink/pull/12525#discussion_r439176121



##
File path: 
flink-streaming-java/src/main/java/org/apache/flink/streaming/runtime/tasks/StreamTask.java
##
@@ -702,11 +704,14 @@ private void disposeAllOperators(boolean logOnlyErrors) 
throws Exception {
operator.dispose();
}
catch (Exception e) {
-   LOG.error("Error during 
disposal of stream operator.", e);
+   disposalException = 
ExceptionUtils.firstOrSuppressed(e, disposalException);
}
}
}
disposedOperators = true;
+   if (disposalException != null) {
+   throw disposalException;

Review comment:
   Mark this as resolved based on offline sync up.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] godfreyhe commented on a change in pull request #12610: [FLINK-17686][doc] Add document to dataGen, print, blackhole connectors

2020-06-11 Thread GitBox


godfreyhe commented on a change in pull request #12610:
URL: https://github.com/apache/flink/pull/12610#discussion_r439172876



##
File path: docs/dev/table/connectors/datagen.md
##
@@ -0,0 +1,149 @@
+---
+title: "DataGen SQL Connector"
+nav-title: DataGen
+nav-parent_id: sql-connectors
+nav-pos: 4
+---
+
+
+Scan Source: Bounded
+Scan Source: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Datagen connector allows for reading by data generation rules.
+
+The Datagen connector can work with [Computed Column syntax]({{ site.baseurl 
}}/dev/table/sql/create.html#create-table).
+This allows you to generate records flexibly.
+
+The Datagen connector is built-in.
+
+Attention Not support complex types: 
Array, Map, Row. Please construct these types by computed column.
+
+How to create an Datagen table
+
+
+For each field, there are two ways to generate data:
+
+- Random generator: default, you can specify random max and min values. For 
char/varchar/string, the length can be specified.
+- Sequence generator: you can specify sequence start and end values.

Review comment:
   Explain more about the behavior after reaching the end value?
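
   For context, a table using both generator kinds might be declared as below. 
This is only a sketch: the option names are taken from the datagen connector 
page under review and should be checked against the final docs; the DDL is run 
via the 1.11 `TableEnvironment#executeSql` API.

{code:java}
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DataGenSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().useBlinkPlanner().inStreamingMode().build());

        // One sequence field and one random field; the review comment above
        // asks what happens once order_id reaches its end value.
        tEnv.executeSql(
                "CREATE TABLE orders (" +
                "  order_id BIGINT," +
                "  price    INT" +
                ") WITH (" +
                "  'connector' = 'datagen'," +
                "  'rows-per-second' = '5'," +
                "  'fields.order_id.kind' = 'sequence'," +
                "  'fields.order_id.start' = '1'," +
                "  'fields.order_id.end' = '1000'," +
                "  'fields.price.kind' = 'random'," +
                "  'fields.price.min' = '1'," +
                "  'fields.price.max' = '100'" +
                ")");
    }
}
{code}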

##
File path: docs/dev/table/connectors/print.md
##
@@ -0,0 +1,119 @@
+---
+title: "Print SQL Connector"
+nav-title: Print
+nav-parent_id: sql-connectors
+nav-pos: 5
+---
+
+
+Sink: Bounded
+Sink: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Print connector allows for writing every row to the standard output or 
standard error stream.
+
+It is designed for:
+
+- Easy test for streaming job.
+- Very useful in production debugging.
+
+Four possible format options:
+
+- PRINT_IDENTIFIER:taskId> output  <- PRINT_IDENTIFIER provided, parallelism > 
1
+- PRINT_IDENTIFIER> output <- PRINT_IDENTIFIER provided, parallelism 
== 1
+- taskId> output  <- no PRINT_IDENTIFIER provided, 
parallelism > 1
+- output  <- no PRINT_IDENTIFIER provided, 
parallelism == 1
+
+The output string format is "$RowKind(f0,f1,f2...)", example is: "+I(1,1)".

Review comment:
   Explain more about RowKind here, or link to where it is already explained?
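
   For reference, the prefixes in that example come from 
`org.apache.flink.types.RowKind`. A sketch, assuming `RowKind#shortString()` as 
in recent Flink versions:

{code:java}
import org.apache.flink.types.RowKind;

public class RowKindSketch {
    public static void main(String[] args) {
        // The print connector prefixes each row with its change-log kind:
        // +I = INSERT, -U = UPDATE_BEFORE, +U = UPDATE_AFTER, -D = DELETE
        for (RowKind kind : RowKind.values()) {
            System.out.println(kind + " -> " + kind.shortString());
        }
    }
}
{code}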

##
File path: docs/dev/table/connectors/print.md
##
@@ -0,0 +1,119 @@
+---
+title: "Print SQL Connector"
+nav-title: Print
+nav-parent_id: sql-connectors
+nav-pos: 5
+---
+
+
+Sink: Bounded
+Sink: UnBounded
+
+* This will be replaced by the TOC
+{:toc}
+
+The Print connector allows for writing every row to the standard output or 
standard error stream.
+
+It is designed for:
+
+- Easy test for streaming job.
+- Very useful in production debugging.
+
+Four possible format options:
+
+- PRINT_IDENTIFIER:taskId> output  <- PRINT_IDENTIFIER provided, parallelism > 
1

Review comment:
   The web display is not friendly





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18261) flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18261:
---
Labels: pull-request-available  (was: )

> flink-orc and flink-parquet have invalid NOTICE file
> 
>
> Key: FLINK-18261
> URL: https://issues.apache.org/jira/browse/FLINK-18261
> Project: Flink
>  Issue Type: Bug
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile)
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Assignee: Jingsong Lee
>Priority: Blocker
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> flink-orc provides a {{-jar-with-dependencies.jar}} variant which ships 
> binaries.
> However, these binaries are not documented in {{META-INF/NOTICE}}.
> There are two similar files in that directory (NOTICE from force-shading and 
> NOTICE.txt from Commons Lang). 
> There is a NOTICE file that looks valid, but it is in {{META-INF/services}}.
> I assume this has been introduced in FLINK-17460.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] JingsongLi opened a new pull request #12622: [FLINK-18261][parquet][orc] flink-orc and flink-parquet have invalid NOTICE file

2020-06-11 Thread GitBox


JingsongLi opened a new pull request #12622:
URL: https://github.com/apache/flink/pull/12622


   
   ## What is the purpose of the change
   
   flink-orc and flink-parquet provide a jar-with-dependencies.jar variant 
which ships binaries.
   However, these binaries are not documented in META-INF/NOTICE.
   There are two similar files in that directory (NOTICE from force-shading and 
NOTICE.txt from Commons Lang).
   
   There is a NOTICE file that looks valid, but it is in META-INF/services.
   
   This has been introduced in FLINK-17460.
   
   ## Brief change log
   
   - Introduce flink-sql-orc to create orc bundled jar.
   - Introduce flink-sql-parquet to create parquet bundled jar.
   
   ## Verifying this change
   
   - Manually check NOTICE file.
   - Manually test bundled jar for sql filesystem connector.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-11 Thread GitBox


flinkbot commented on pull request #12621:
URL: https://github.com/apache/flink/pull/12621#issuecomment-643030555


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 9c04ca36234101ce6b0886a062dcc16977834a96 (Fri Jun 12 
02:31:43 UTC 2020)
   
 ✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12620: [FLINK-18226][runtime] Fix ActiveResourceManager requesting extra workers on termination of existing workers.

2020-06-11 Thread GitBox


flinkbot commented on pull request #12620:
URL: https://github.com/apache/flink/pull/12620#issuecomment-643030171


   
   ## CI report:
   
   * 4123e5f74a0c4f312588fdaf616dce9b6899371b UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12604: [FLINK-18256][orc] Exclude ORC's Hadoop dependency and pull in provided vanilla hadoop in flink-orc

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12604:
URL: https://github.com/apache/flink/pull/12604#issuecomment-642611183


   
   ## CI report:
   
   * 962a120dbdef84aad1f0db143e88bf1c0ba09b2a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3295)
 
   * 778634cee06f83247f7c59797c778ec560b9cd4a UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12577: [FLINK-17599][docs] Update documents due to FLIP-84

2020-06-11 Thread GitBox


flinkbot edited a comment on pull request #12577:
URL: https://github.com/apache/flink/pull/12577#issuecomment-641885509


   
   ## CI report:
   
   * 91cf992ad59795acee6207899d46e243c5d2a7fb Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3326)
 
   * 1c609f5b335fab9206b4fabe470899ede61333ff UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] zzh1985 opened a new pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-11 Thread GitBox


zzh1985 opened a new pull request #12621:
URL: https://github.com/apache/flink/pull/12621


   …ted deprecation
   
   
   
   ## What is the purpose of the change
   
   Update Chinese documentation for ListCheckpointed deprecation [FLINK-6258]
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



