[jira] [Closed] (FLINK-14977) Add informational primary key constraints in Table API

2020-06-23 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-14977.
---
Resolution: Fixed

All sub-tasks are resolved. 

> Add informational primary key constraints in Table API
> --
>
> Key: FLINK-14977
> URL: https://issues.apache.org/jira/browse/FLINK-14977
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> Corresponding flip: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+87%3A+Primary+key+constraints+in+Table+API
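For readers landing on this thread from the archive: FLIP-87 added *informational* primary key constraints, i.e. constraints Flink treats as metadata hints rather than something it validates at runtime. A minimal sketch of the resulting DDL (the table name and connector options are illustrative, not taken from this thread):

```sql
-- Informational primary key: declared NOT ENFORCED, so Flink does not
-- check uniqueness itself; the planner only uses the constraint as
-- metadata (e.g. for upsert sinks and optimizations).
CREATE TABLE users (
  user_id   BIGINT,
  user_name STRING,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'datagen'
);
```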



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-14977) Add informational primary key constraints in Table API

2020-06-23 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-14977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143531#comment-17143531
 ] 

Leonard Xu commented on FLINK-14977:


Maybe this issue could be closed if there are no new subtasks :D  [~dwysakowicz]

> Add informational primary key constraints in Table API
> --
>
> Key: FLINK-14977
> URL: https://issues.apache.org/jira/browse/FLINK-14977
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / API
>Reporter: Dawid Wysakowicz
>Assignee: Dawid Wysakowicz
>Priority: Major
>
> Corresponding flip: 
> https://cwiki.apache.org/confluence/display/FLINK/FLIP+87%3A+Primary+key+constraints+in+Table+API





[GitHub] [flink] flinkbot edited a comment on pull request #11852: [FLINK-17300] Log the lineage information between ExecutionAttemptID …

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #11852:
URL: https://github.com/apache/flink/pull/11852#issuecomment-617579676


   
   ## CI report:
   
   * 677f94adc672e309a5863034c2348f0cf5a24dae Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3987)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] Leo1993Java commented on pull request #12759: [FLINK-18397][docs-zh] Translate "Table & SQL Connectors Overview" page into Chinese

2020-06-23 Thread GitBox


Leo1993Java commented on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648590848


   > Thanks for the contribution @Leo1993Java , I will have a look at this 
later.
   > 
   > Btw, please use a formal title for the pull request title and commit title, 
and please add content to the pull request description (not just the unchanged 
template). You can read
   > 
https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
 and https://flink.apache.org/zh/contributing/contribute-documentation.html for 
more details about how to contribute.
   
   @wuchong 
   Thank you for your questions and guidance. Next time I will submit it in a 
standard format.







[GitHub] [flink] libenchao commented on pull request #12748: [FLINK-18324][docs-zh] Translate updated data type into Chinese

2020-06-23 Thread GitBox


libenchao commented on pull request #12748:
URL: https://github.com/apache/flink/pull/12748#issuecomment-648587190


   @liyubin117 Thanks for your contribution, I'll give it a deep review later 
today.







[GitHub] [flink] flinkbot edited a comment on pull request #12760: [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12760:
URL: https://github.com/apache/flink/pull/12760#issuecomment-648576202


   
   ## CI report:
   
   * 952f6cc3e3433c75b016cd138a613704779cc41c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3988)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Comment Edited] (FLINK-18278) Translate new documentation homepage

2020-06-23 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143518#comment-17143518
 ] 

Benchao Li edited comment on FLINK-18278 at 6/24/20, 5:00 AM:
--

[~sjwiesman] Is this issue a duplicate of 
https://issues.apache.org/jira/browse/FLINK-18282 ?

[~zhangzhanhua] I just noticed that this may be a duplicate of another issue, 
which has already been translated. Sorry about this.


was (Author: libenchao):
[~sjwiesman] Is this issue is a duplicate of 
https://issues.apache.org/jira/browse/FLINK-18282 ?

[~zhangzhanhua] I just noticed that this maybe a duplicate of another issue, 
which has been translated already. Sorry about this.

> Translate new documentation homepage
> ---
>
> Key: FLINK-18278
> URL: https://issues.apache.org/jira/browse/FLINK-18278
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: Seth Wiesman
>Assignee: zhangzhanhua
>Priority: Major
>
> Sync changes with FLINK-17981





[jira] [Commented] (FLINK-18278) Translate new documentation homepage

2020-06-23 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143518#comment-17143518
 ] 

Benchao Li commented on FLINK-18278:


[~sjwiesman] Is this issue a duplicate of 
https://issues.apache.org/jira/browse/FLINK-18282 ?

[~zhangzhanhua] I just noticed that this may be a duplicate of another issue, 
which has already been translated. Sorry about this.

> Translate new documentation homepage
> ---
>
> Key: FLINK-18278
> URL: https://issues.apache.org/jira/browse/FLINK-18278
> Project: Flink
>  Issue Type: Improvement
>  Components: chinese-translation
>Reporter: Seth Wiesman
>Assignee: zhangzhanhua
>Priority: Major
>
> Sync changes with FLINK-17981





[GitHub] [flink] flinkbot commented on pull request #12760: [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11

2020-06-23 Thread GitBox


flinkbot commented on pull request #12760:
URL: https://github.com/apache/flink/pull/12760#issuecomment-648576202


   
   ## CI report:
   
   * 952f6cc3e3433c75b016cd138a613704779cc41c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11852: [FLINK-17300] Log the lineage information between ExecutionAttemptID …

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #11852:
URL: https://github.com/apache/flink/pull/11852#issuecomment-617579676


   
   ## CI report:
   
   * 4e682b06757524da0011bf0b236c9b37158dcf80 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161377523) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=35)
 
   * 677f94adc672e309a5863034c2348f0cf5a24dae Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3987)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12760: [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11

2020-06-23 Thread GitBox


flinkbot commented on pull request #12760:
URL: https://github.com/apache/flink/pull/12760#issuecomment-648572953


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 952f6cc3e3433c75b016cd138a613704779cc41c (Wed Jun 24 
04:14:40 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18420:
---
Labels: pull-request-available test-stability  (was: test-stability)

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831Z Pattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Z require at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-23T23:07:01.9419058Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-23T23:07:01.9420497Z  at 
> 

[GitHub] [flink] leonardBang opened a new pull request #12760: [FLINK-18420][tests] Disable failed test SQLClientHBaseITCase in java 11

2020-06-23 Thread GitBox


leonardBang opened a new pull request #12760:
URL: https://github.com/apache/flink/pull/12760


   ## What is the purpose of the change
   
   * This pull request disables SQLClientHBaseITCase on Java 11. We only support 
HBase 1.4.3, which does not support JDK 11, so we skip this e2e test on Java 11.
   
   
   ## Brief change log
   
 - update file SQLClientHBaseITCase.java
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
 - The runtime per-record code paths (performance sensitive): ( no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
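The skip described above boils down to a JVM version check. The check can be sketched with plain Java (illustrative only: the class name `JavaVersionGuard` is hypothetical, and this is not Flink's actual SQLClientHBaseITCase code, which would typically route such a check through JUnit's `Assume` mechanism):

```java
// Illustrative sketch: skip logic for an e2e test that cannot run on
// JDK 11, because HBase 1.4.3 only supports JDK 8.
public class JavaVersionGuard {

    /** Parses the major Java version: "1.8" -> 8, "11" -> 11. */
    static int majorVersion(String specVersion) {
        // Pre-9 JVMs report "1.x"; newer ones report the major version directly.
        return specVersion.startsWith("1.")
                ? Integer.parseInt(specVersion.substring(2))
                : Integer.parseInt(specVersion);
    }

    /** True when an HBase-1.4.3-based test should be skipped on this JVM. */
    static boolean skipHBaseTest(String specVersion) {
        return majorVersion(specVersion) > 8;
    }

    public static void main(String[] args) {
        // In a real JUnit test this condition would feed Assume.assumeFalse(...)
        // so the test is reported as skipped rather than failed.
        String spec = System.getProperty("java.specification.version");
        System.out.println("skip=" + skipHBaseTest(spec));
    }
}
```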
   







[GitHub] [flink] wuchong commented on pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


wuchong commented on pull request #12758:
URL: https://github.com/apache/flink/pull/12758#issuecomment-648572467


   cc @fsk119 , could you help to review this?







[GitHub] [flink] flinkbot edited a comment on pull request #11852: [FLINK-17300] Log the lineage information between ExecutionAttemptID …

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #11852:
URL: https://github.com/apache/flink/pull/11852#issuecomment-617579676


   
   ## CI report:
   
   * 4e682b06757524da0011bf0b236c9b37158dcf80 Travis: 
[SUCCESS](https://travis-ci.com/github/flink-ci/flink/builds/161377523) Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=35)
 
   * 677f94adc672e309a5863034c2348f0cf5a24dae UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] wuchong edited a comment on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by Le

2020-06-23 Thread GitBox


wuchong edited a comment on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648569854


   Thanks for the contribution @Leo1993Java , I will have a look at this later. 
   
   Btw, please use a formal title for the pull request title and commit title, 
and please add content to the pull request description (not just the unchanged 
template). You can read  
   
https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
 and https://flink.apache.org/zh/contributing/contribute-documentation.html for 
more details about how to contribute. 







[GitHub] [flink] wuchong commented on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by LeoGq。

2020-06-23 Thread GitBox


wuchong commented on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648569854


   Thanks for the contribution @Leo1993Java , I will have a look at this later. 
   
   Btw, please use a formal title for the pull request title and commit title. 
You can read  
   
https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications
 and https://flink.apache.org/zh/contributing/contribute-documentation.html for 
more details about how to contribute. 







[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143496#comment-17143496
 ] 

Jark Wu commented on FLINK-18371:
-

Ha! You are one step faster than me [~libenchao]. Thanks. 

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 0|val_0
> 490|val_490
> 87|val_87
> 364|val_364
> 179|val_179
> 118|val_118
> 134|val_134
> 395|val_395
> 282|val_282
> 138|val_138
> 238|val_238
> 419|val_419
> 15|val_15
> 118|val_118
> 72|val_72
> 90|val_90
> 307|val_307
> 19|val_19
> 435|val_435
> 10|val_10
> 277|val_277
> 273|val_273
> 306|val_306
> 224|val_224
> 309|val_309
> 389|val_389
> 327|val_327
> 242|val_242
> 369|val_369
> 392|val_392
> 272|val_272
> 331|val_331
> 401|val_401
> 

[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143493#comment-17143493
 ] 

Jark Wu commented on FLINK-18371:
-

Thanks [~Leonard Xu] and [~caoshaokan] for verifying this. As it can't be 
reproduced and looks like a bug on the sql-gateway side, I will close this 
issue. Please feel free to reopen it if you think this is a bug in Flink. 

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 0|val_0
> 490|val_490
> 87|val_87
> 364|val_364
> 179|val_179
> 118|val_118
> 134|val_134
> 395|val_395
> 282|val_282
> 138|val_138
> 238|val_238
> 419|val_419
> 15|val_15
> 118|val_118
> 72|val_72
> 90|val_90
> 307|val_307
> 19|val_19
> 435|val_435
> 10|val_10
> 

[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874)
 
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143489#comment-17143489
 ] 

Benchao Li commented on FLINK-18371:


Since we cannot reproduce this in 1.11 & master branch, I prefer to close it 
for now. CC [~jark] 

Feel free to reopen it if anyone can reproduce it.
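For context, the `cast(key as decimal(10,2))` in the reported query rescales the value to two fraction digits; a converter that dereferences a null field without a guard is one plausible way to hit an NPE like the one in the report. A minimal plain-Java sketch of that cast semantics (hypothetical class and method names, not Flink's actual `DataFormatConverters` code):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

// Hypothetical sketch: cast(key as decimal(10,2)) rescales the value to
// 2 fraction digits. Without the null guard below, .setScale() on a null
// field would throw a NullPointerException similar to the reported one.
// (The precision check is omitted for brevity.)
public class DecimalCastSketch {
    static BigDecimal castToDecimal(Long key, int precision, int scale) {
        if (key == null) {
            return null; // guard: a null CSV field must stay null
        }
        return BigDecimal.valueOf(key).setScale(scale, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        System.out.println(castToDecimal(238L, 10, 2)); // 238.00
        System.out.println(castToDecimal(null, 10, 2)); // null
    }
}
```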

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 0|val_0
> 490|val_490
> 87|val_87
> 364|val_364
> 179|val_179
> 118|val_118
> 134|val_134
> 395|val_395
> 282|val_282
> 138|val_138
> 238|val_238
> 419|val_419
> 15|val_15
> 118|val_118
> 72|val_72
> 90|val_90
> 307|val_307
> 19|val_19
> 435|val_435
> 10|val_10
> 277|val_277
> 273|val_273
> 306|val_306
> 224|val_224
> 309|val_309
> 389|val_389

[jira] [Closed] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread Benchao Li (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benchao Li closed FLINK-18371.
--
Fix Version/s: (was: 1.11.0)
   Resolution: Cannot Reproduce

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1

[GitHub] [flink] KarmaGYZ commented on pull request #11852: [FLINK-17300] Log the lineage information between ExecutionAttemptID …

2020-06-23 Thread GitBox


KarmaGYZ commented on pull request #11852:
URL: https://github.com/apache/flink/pull/11852#issuecomment-648563220


   Thanks for the review, @tillrohrmann. I think it would be good to also add a 
test to `Execution`. I've updated the PR and rebased it on the latest master.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-15693) Stop receiving incoming RPC messages when RpcEndpoint is closing

2020-06-23 Thread MinWang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-15693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143488#comment-17143488
 ] 

MinWang commented on FLINK-15693:
-

Yes! I think I have missed that. I will check that as soon as possible. 

> Stop receiving incoming RPC messages when RpcEndpoint is closing
> 
>
> Key: FLINK-15693
> URL: https://issues.apache.org/jira/browse/FLINK-15693
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Coordination
>Affects Versions: 1.9.1, 1.10.0
>Reporter: Till Rohrmann
>Priority: Major
> Fix For: 1.11.0
>
>
> When calling {{RpcEndpoint#closeAsync()}}, the system triggers 
> {{RpcEndpoint#onStop}} and transitions the endpoint into the 
> {{TerminatingState}}. In order to allow asynchronous clean up operations, the 
> main thread executor is not shut down immediately. As a side effect, the 
> {{RpcEndpoint}} still accepts incoming RPC messages from other components. 
> I think it would be cleaner to no longer accept incoming RPC messages once we 
> are in the {{TerminatingState}}. That way we would not worry about the 
> internal state of the {{RpcEndpoint}} when processing RPC messages (similar 
> to 
> [here|https://github.com/apache/flink/blob/master/flink-runtime/src/main/java/org/apache/flink/runtime/taskexecutor/TaskExecutor.java#L952]).
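The behavior proposed above — rejecting incoming RPC messages once the endpoint enters the {{TerminatingState}}, while still allowing already-enqueued work to run during asynchronous clean-up — can be sketched outside Flink as follows (hypothetical class names, not the actual {{RpcEndpoint}} implementation):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch (not Flink's actual RpcEndpoint): once the endpoint
// is TERMINATING, newly arriving messages are rejected, while messages
// already enqueued can still be processed by asynchronous clean-up.
public class TerminatingEndpointSketch {
    enum State { RUNNING, TERMINATING }

    private State state = State.RUNNING;
    private final Queue<Runnable> mailbox = new ArrayDeque<>();

    /** Accepts a message only while RUNNING; returns false otherwise. */
    public boolean tell(Runnable rpcMessage) {
        if (state != State.RUNNING) {
            return false; // dropped: endpoint is shutting down
        }
        return mailbox.offer(rpcMessage);
    }

    /** Transition to TERMINATING; the mailbox itself is not shut down yet. */
    public void closeAsync() {
        state = State.TERMINATING;
    }

    /** Run the remaining (pre-close) messages, e.g. clean-up actions. */
    public void drainMailbox() {
        Runnable r;
        while ((r = mailbox.poll()) != null) {
            r.run();
        }
    }

    public static void main(String[] args) {
        TerminatingEndpointSketch ep = new TerminatingEndpointSketch();
        System.out.println(ep.tell(() -> {})); // true: accepted while RUNNING
        ep.closeAsync();
        System.out.println(ep.tell(() -> {})); // false: rejected while TERMINATING
        ep.drainMailbox();
    }
}
```

With this pattern, RPC handlers no longer need to inspect the endpoint's internal state, since no message is delivered after the terminating transition.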



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18423) Fix Prefer tag in document "Detecting Patterns" page of "Streaming Concepts"

2020-06-23 Thread Roc Marshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143482#comment-17143482
 ] 

Roc Marshal commented on FLINK-18423:
-

Hi, [~alpinegizmo].

Could you assign this to me?

Thank you.

> Fix Prefer tag in document "Detecting Patterns" page of "Streaming Concepts"
> 
>
> Key: FLINK-18423
> URL: https://issues.apache.org/jira/browse/FLINK-18423
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.10.0, 1.10.1
>Reporter: Roc Marshal
>Priority: Minor
>  Labels: document, easyfix
>
>  Update Prefer tag in documentation "Detecting Patterns" page of "Streaming 
> Concepts" according to 
> [Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
>   
> The markdown file location is:  
> flink/docs/dev/table/streaming/match_recognize.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18423) Fix Prefer tag in document "Detecting Patterns" page of "Streaming Concepts"

2020-06-23 Thread Roc Marshal (Jira)
Roc Marshal created FLINK-18423:
---

 Summary: Fix Prefer tag in document "Detecting Patterns" page of 
"Streaming Concepts"
 Key: FLINK-18423
 URL: https://issues.apache.org/jira/browse/FLINK-18423
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Documentation / Training
Affects Versions: 1.10.1, 1.10.0
Reporter: Roc Marshal


 Update Prefer tag in documentation "Detecting Patterns" page of "Streaming 
Concepts" according to 
[Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
  

The markdown file location is:  
flink/docs/dev/table/streaming/match_recognize.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] ysn2233 commented on pull request #12726: [FLINK-14938][Connectors / ElasticSearch] Fix ConcurrentModificationException when Flink elasticsearch failure handler re-add indexrequest

2020-06-23 Thread GitBox


ysn2233 commented on pull request #12726:
URL: https://github.com/apache/flink/pull/12726#issuecomment-648556586


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874)
 
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143472#comment-17143472
 ] 

Dian Fu commented on FLINK-18420:
-

[~Leonard Xu] Thanks for taking this issue. Have assigned this issue to you.

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831ZPattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-23T23:07:01.9419058Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-23T23:07:01.9420497Z  

[jira] [Assigned] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu reassigned FLINK-18420:
---

Assignee: Leonard Xu

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831ZPattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-23T23:07:01.9419058Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-23T23:07:01.9420497Z  at 
> 

[jira] [Commented] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143471#comment-17143471
 ] 

Leonard Xu commented on FLINK-18420:


Thanks for the report.

The test failed in the JDK11 profile; the reason is that we only support HBase 
1.4.3, which does not support JDK11 [1][2].
 I'd like to fix this. [~dian.fu], could you assign the ticket to me?

[1] [https://hbase.apache.org/book.html#basic.prerequisites]
 [2] https://issues.apache.org/jira/browse/HBASE-22972
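The JDK check behind that reasoning can be sketched in plain Java (a hypothetical helper, not the actual test-harness code):

```java
// Hypothetical sketch: parse a JDK version string to decide whether the
// HBase 1.4.x e2e test can run, since HBase 1.4.3 supports only JDK 8.
public class JdkVersionCheckSketch {
    static int majorVersion(String version) {
        // Pre-9 JDKs report "1.8.0_252"; JDK 9+ reports "11.0.7" etc.
        String v = version.startsWith("1.") ? version.substring(2) : version;
        int dot = v.indexOf('.');
        return Integer.parseInt(dot < 0 ? v : v.substring(0, dot));
    }

    static boolean supportsHBase14(String jdkVersion) {
        return majorVersion(jdkVersion) <= 8;
    }

    public static void main(String[] args) {
        System.out.println(supportsHBase14("1.8.0_252")); // true
        System.out.println(supportsHBase14("11.0.7"));    // false
    }
}
```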

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831ZPattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> 

[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444610772



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,43 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 
的时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time 
attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a 
processing-time temporal table function will always return the latest known 
versions of the underlying table
-and any updates in the underlying history table will also immediately 
overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the 
build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join 
results.
+可以将 processing-time 的时态 Join 视作一个简单的 `HashMap`,其中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous 
record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most 
recent/current state of the `HashMap`.
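
The HashMap analogy above can be sketched in plain Java (the currency-rate keys and values are made up for illustration; this is not Flink API code):

```java
import java.util.HashMap;
import java.util.Map;

// Processing-time temporal join, reduced to its essence: the build side is a
// map keyed by primary key, updates overwrite older versions, and each probe
// record only ever sees the latest state.
public class ProcessingTimeTemporalJoinSketch {
    public static void main(String[] args) {
        Map<String, Double> latestRates = new HashMap<>();
        latestRates.put("EUR", 2.0); // first version of the build-side row
        latestRates.put("EUR", 1.5); // an update simply overwrites the old value
        // A probe-side order of 100 EUR joins against the current state only.
        System.out.println(100 * latestRates.get("EUR")); // prints 150.0
    }
}
```

Earlier versions are unrecoverable in this scheme, which is why updates of the build side have no effect on previously emitted join results.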
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible 
to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+与基于 processing-time 的时态 Join 相比,时态表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 
中,同时也会存储自上一个 watermarks 以来的所有版本(按时间区分)。
 
-Compared to processing-time temporal joins, the temporal table does not only 
keep the latest version (with respect to the defined primary key) of the build 
side records in the state
-but stores all versions (identified by time) since the last watermark.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 
的版本根据[时态表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that 
is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` 
according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or 
equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/zh/dev/event_time.html) 
允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照,因为不会再有时间戳更低或相等的记录到达。
 
-By definition of event time, [watermarks]({{ site.baseurl 
}}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer 
necessary because no incoming row with
-lower or equal timestamp is expected.
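
The versioned lookup described above can be sketched with a sorted map in plain Java (the timestamps and rates are made-up values; this is not Flink API code):

```java
import java.util.TreeMap;

// Event-time temporal join, reduced to its essence: the build side keeps all
// versions since the last watermark, ordered by timestamp; a probe record at
// event time t joins with the newest version whose timestamp is <= t.
public class EventTimeTemporalJoinSketch {
    public static void main(String[] args) {
        TreeMap<Long, Double> rateVersions = new TreeMap<>();
        rateVersions.put(1200L, 1.0); // version valid from "12:00"
        rateVersions.put(1230L, 2.0); // version valid from "12:30"
        // A probe record at "12:29" still sees the "12:00" version ...
        System.out.println(rateVersions.floorEntry(1229L).getValue()); // 1.0
        // ... while a probe record at "12:30" sees the "12:30" version.
        System.out.println(rateVersions.floorEntry(1230L).getValue()); // 2.0
        // A watermark at "12:30" lets versions strictly older than the newest
        // one at or before it be discarded.
        rateVersions.headMap(rateVersions.floorKey(1230L)).clear();
        System.out.println(rateVersions.size()); // 1
    }
}
```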
+
 
-Join with a Temporal Table
+时态表 Join
 --
 
-A join with a temporal table joins an arbitrary table (left input/probe side) 
with a temporal table (right input/build side),
-i.e., an external dimension table that changes over time. Please check the 
corresponding page for more information about [temporal 
tables](temporal_tables.html#temporal-table).
+时态表 Join 意味着对任意表(左输入/探针侧)和一个时态表(右输入/构建侧)执行的 Join 
操作,即随时间变化的外部维表。请参考相应的页面以获取更多有关[时态表](temporal_tables.html#temporal-table)的信息。
 
-Attention Users can not use arbitrary 
tables as a temporal table, but need to use a table backed by a 
`LookupableTableSource`. A `LookupableTableSource` can only be used for 
temporal join as a temporal table. See the page for more details about [how to 
define 
LookupableTableSource](../sourceSinks.html#defining-a-tablesource-with-lookupable).
+注意 不是任何表都能用作时态表,能作为时态表的表必须实现接口 
`LookupableTableSource`。接口 `LookupableTableSource` 的实例只能作为时态表用于时态 Join 
。查看此页面获取更多关于[如何实现接口 
`LookupableTableSource`](../sourceSinks.html#defining-a-tablesource-with-lookupable)
 的详细内容。

Review comment:
   fixed





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444610746



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 
的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ 
site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。

Review comment:
   Fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-12855) Stagger TumblingProcessingTimeWindow processing to distribute workload

2020-06-23 Thread Teng Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143330#comment-17143330
 ] 

Teng Hu edited comment on FLINK-12855 at 6/24/20, 2:28 AM:
---

The idea came from our own practice in scaling up the  
[application|https://youtu.be/9U8ksIqrgLM], also inspired by this 
[blogpost|https://klaviyo.tech/flinkperf-c7bd28acc67].

Yes, I agree the naming could be confusing, since there seems to be no standard 
way of defining those windowing functions in the industry. I actually like the 
idea of creating a new window type, but I would like to hear what the rest of 
the community thinks.

The event time window change is 
[out|https://github.com/apache/flink/pull/12640] and more to come.

Thanks,
Niel


was (Author: tenghu):
The idea came from our own practice of [scaling up the 
application|https://youtu.be/9U8ksIqrgLM], also inspired by this 
[blogpost|https://klaviyo.tech/flinkperf-c7bd28acc67].

Yes, I agree naming could be confusing since there seems to be no standard way 
of defining those windowing functions in the industry. I actually like the idea 
of creating a new window type, but I want to hear more from rest of the 
community what do you guys think.

The event time window change is 
[out|https://github.com/apache/flink/pull/12640] and more to come.

Thanks,
Niel

> Stagger TumblingProcessingTimeWindow processing to distribute workload
> --
>
> Key: FLINK-12855
> URL: https://issues.apache.org/jira/browse/FLINK-12855
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Teng Hu
>Assignee: Teng Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: stagger_window.png, stagger_window_delay.png, 
> stagger_window_throughput.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink natively triggers all panes belonging to the same window at the same 
> time. In other words, all panes are aligned and their triggers all fire 
> simultaneously, causing a thundering-herd effect.
> This new feature adds the option to stagger panes across partitioned 
> streams so that their workloads are distributed.
> Attachment: proof of concept working
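
The staggering idea can be illustrated with the usual tumbling-window start arithmetic, `start = timestamp - (timestamp - offset + size) % size`: a non-zero per-pane offset shifts when each pane's trigger fires. A plain-Java sketch (the 3-second offset is a made-up stagger value, not the feature's actual API):

```java
// Two panes over the same 10s tumbling-window size: with offset 0 they all
// fire on the same boundaries; a pane staggered by 3s fires on shifted ones.
public class StaggeredWindowSketch {
    static long windowStart(long timestamp, long offset, long size) {
        return timestamp - (timestamp - offset + size) % size;
    }

    public static void main(String[] args) {
        long size = 10_000L; // 10s windows, timestamps in milliseconds
        System.out.println(windowStart(12_345L, 0L, size));     // 10000 -> window [10000, 20000)
        System.out.println(windowStart(12_345L, 3_000L, size)); // 3000  -> window [3000, 13000)
    }
}
```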



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18422) Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-23 Thread RocMarshal (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143470#comment-17143470
 ] 

RocMarshal commented on FLINK-18422:


Hi, [~alpinegizmo].

Could you assign this to me?

Thank you.

> Update Prefer tag in documentation 'Fault Tolerance training lesson'
> 
>
> Key: FLINK-18422
> URL: https://issues.apache.org/jira/browse/FLINK-18422
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.10.0, 1.10.1
>Reporter: RocMarshal
>Priority: Minor
>  Labels: document, easyfix
> Attachments: current_prefer_mode.png
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Update Prefer tag in documentation 'Fault Tolerance training lesson' 
> according to 
> [Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
>   
> The location is: docs/learn-flink/fault_tolerance.md





[jira] [Updated] (FLINK-18422) Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-23 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-18422:
---
Attachment: current_prefer_mode.png

> Update Prefer tag in documentation 'Fault Tolerance training lesson'
> 
>
> Key: FLINK-18422
> URL: https://issues.apache.org/jira/browse/FLINK-18422
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.10.0, 1.10.1
>Reporter: RocMarshal
>Priority: Minor
>  Labels: document, easyfix
> Attachments: current_prefer_mode.png
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Update Prefer tag in documentation 'Fault Tolerance training lesson' 
> according to 
> [Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
>   
> The location is: docs/learn-flink/fault_tolerance.md





[jira] [Updated] (FLINK-18422) Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-23 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-18422:
---
Attachment: (was: 截屏2020-06-24 10.19.56.png)

> Update Prefer tag in documentation 'Fault Tolerance training lesson'
> 
>
> Key: FLINK-18422
> URL: https://issues.apache.org/jira/browse/FLINK-18422
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.10.0, 1.10.1
>Reporter: RocMarshal
>Priority: Minor
>  Labels: document, easyfix
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Update Prefer tag in documentation 'Fault Tolerance training lesson' 
> according to 
> [Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
>   
> The location is: docs/learn-flink/fault_tolerance.md





[jira] [Updated] (FLINK-18422) Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-23 Thread RocMarshal (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

RocMarshal updated FLINK-18422:
---
Attachment: 截屏2020-06-24 10.19.56.png

> Update Prefer tag in documentation 'Fault Tolerance training lesson'
> 
>
> Key: FLINK-18422
> URL: https://issues.apache.org/jira/browse/FLINK-18422
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation, Documentation / Training
>Affects Versions: 1.10.0, 1.10.1
>Reporter: RocMarshal
>Priority: Minor
>  Labels: document, easyfix
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Update Prefer tag in documentation 'Fault Tolerance training lesson' 
> according to 
> [Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
>   
> The location is: docs/learn-flink/fault_tolerance.md





[jira] [Created] (FLINK-18422) Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-23 Thread RocMarshal (Jira)
RocMarshal created FLINK-18422:
--

 Summary: Update Prefer tag in documentation 'Fault Tolerance 
training lesson'
 Key: FLINK-18422
 URL: https://issues.apache.org/jira/browse/FLINK-18422
 Project: Flink
  Issue Type: Improvement
  Components: Documentation, Documentation / Training
Affects Versions: 1.10.1, 1.10.0
Reporter: RocMarshal


Update Prefer tag in documentation 'Fault Tolerance training lesson' according 
to 
[Prefer|http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html].
  

The location is: docs/learn-flink/fault_tolerance.md





[jira] [Commented] (FLINK-18421) Elasticsearch (v6.3.1) sink end-to-end test instable

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143462#comment-17143462
 ] 

Dian Fu commented on FLINK-18421:
-

another instance on the master branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> Elasticsearch (v6.3.1) sink end-to-end test instable
> 
>
> Key: FLINK-18421
> URL: https://issues.apache.org/jira/browse/FLINK-18421
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a
> {code}
> 2020-06-23T22:28:10.5540446Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
> test' failed after 0 minutes and 36 seconds! Test exited with exit code 0 but 
> the logs contained errors, exceptions or non-empty .out files
> {code}
> exceptions in the log:
> {code}
> (1/1) (69721d998f4b68253d1e59f7f9065def) switched from DEPLOYING to RUNNING.
> 2020-06-23T22:28:10.5206189Z 2020-06-23 22:28:05,844 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
> Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
> (69721d998f4b68253d1e59f7f9065def) switched from RUNNING to FINISHED.
> 2020-06-23T22:28:10.5207672Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job 
> Elasticsearch 6.x end to end sink test example 
> (59aa77d1ad30f5bedd2759ecfe2bb870) switched from state RUNNING to FINISHED.
> 2020-06-23T22:28:10.5208679Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Stopping 
> checkpoint coordinator for job 59aa77d1ad30f5bedd2759ecfe2bb870.
> 2020-06-23T22:28:10.5209478Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore [] - 
> Shutting down
> 2020-06-23T22:28:10.5210355Z 2020-06-23 22:28:05,861 WARN  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Error 
> encountered during shutdown
> 2020-06-23T22:28:10.5211371Z java.util.concurrent.CompletionException: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
> rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
> size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
> 2020-06-23T22:28:10.5212310Z  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5212864Z  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213404Z  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:838) 
> ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213939Z  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5214480Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5339896Z  at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5340629Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:594)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341291Z  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341799Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5342271Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
> 2020-06-23T22:28:10.5342825Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5343516Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344122Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344635Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5345088Z  at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5345861Z Caused by: 
> java.util.concurrent.RejectedExecutionException: Task 
> 

[jira] [Comment Edited] (FLINK-18421) Elasticsearch (v6.3.1) sink end-to-end test instable

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143462#comment-17143462
 ] 

Dian Fu edited comment on FLINK-18421 at 6/24/20, 2:13 AM:
---

Another instance on the master branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5


was (Author: dian.fu):
another instance on the master branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5

> Elasticsearch (v6.3.1) sink end-to-end test instable
> 
>
> Key: FLINK-18421
> URL: https://issues.apache.org/jira/browse/FLINK-18421
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a
> {code}
> 2020-06-23T22:28:10.5540446Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
> test' failed after 0 minutes and 36 seconds! Test exited with exit code 0 but 
> the logs contained errors, exceptions or non-empty .out files
> {code}
> exceptions in the log:
> {code}
> (1/1) (69721d998f4b68253d1e59f7f9065def) switched from DEPLOYING to RUNNING.
> 2020-06-23T22:28:10.5206189Z 2020-06-23 22:28:05,844 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
> Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
> (69721d998f4b68253d1e59f7f9065def) switched from RUNNING to FINISHED.
> 2020-06-23T22:28:10.5207672Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job 
> Elasticsearch 6.x end to end sink test example 
> (59aa77d1ad30f5bedd2759ecfe2bb870) switched from state RUNNING to FINISHED.
> 2020-06-23T22:28:10.5208679Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Stopping 
> checkpoint coordinator for job 59aa77d1ad30f5bedd2759ecfe2bb870.
> 2020-06-23T22:28:10.5209478Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore [] - 
> Shutting down
> 2020-06-23T22:28:10.5210355Z 2020-06-23 22:28:05,861 WARN  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Error 
> encountered during shutdown
> 2020-06-23T22:28:10.5211371Z java.util.concurrent.CompletionException: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
> rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
> size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
> 2020-06-23T22:28:10.5212310Z  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5212864Z  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213404Z  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:838) 
> ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213939Z  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5214480Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5339896Z  at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5340629Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:594)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341291Z  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341799Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5342271Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
> 2020-06-23T22:28:10.5342825Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5343516Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344122Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344635Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  

[jira] [Comment Edited] (FLINK-18187) CheckPubSubEmulatorTest failed on azure

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143460#comment-17143460
 ] 

Dian Fu edited comment on FLINK-18187 at 6/24/20, 2:11 AM:
---

Another instance on 1.11 branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3984=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee

https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3984=logs=08866332-78f7-59e4-4f7e-49a56faa3179


was (Author: dian.fu):
Another instance on 1.11 branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3984=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee

> CheckPubSubEmulatorTest failed on azure
> ---
>
> Key: FLINK-18187
> URL: https://issues.apache.org/jira/browse/FLINK-18187
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Priority: Critical
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
>  
> {code:java}
> 2020-06-08T12:45:15.9874996Z 82609 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Waiting a while to receive the m
>  essage...
> *2020-06-08T12:45:16.1955546Z 82816 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Timeout during shutdown
> *2020-06-08T12:45:16.1956405Z java.util.concurrent.TimeoutException: Timed 
> out waiting for InnerService [STOPPING] to reach a terminal state. Current 
> state: ST*OPPING
> ...
> 2020-06-08T12:46:08.5914230Z 135213 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager
>  [] -
>  2020-06-08T12:46:08.6054783Z [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 54.754 s <<< FAILURE! - in 
> org.apache.flink.streaming.con nectors.gcp.pubsub.CheckPubSubEmulatorTest
>  2020-06-08T12:46:08.6062906Z [ERROR] 
> testPull(org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest)
>  Time elapsed: 52.123 s <<< FAILURE!
>  2020-06-08T12:46:08.6063659Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> {code}
>  





[jira] [Commented] (FLINK-18187) CheckPubSubEmulatorTest failed on azure

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143460#comment-17143460
 ] 

Dian Fu commented on FLINK-18187:
-

Another instance on 1.11 branch:
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3984=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c88eea3b-64a0-564d-0031-9fdcd7b8abee

> CheckPubSubEmulatorTest failed on azure
> ---
>
> Key: FLINK-18187
> URL: https://issues.apache.org/jira/browse/FLINK-18187
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Priority: Critical
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
>  
> {code:java}
> 2020-06-08T12:45:15.9874996Z 82609 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Waiting a while to receive the m
>  essage...
> *2020-06-08T12:45:16.1955546Z 82816 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Timeout during shutdown
> *2020-06-08T12:45:16.1956405Z java.util.concurrent.TimeoutException: Timed 
> out waiting for InnerService [STOPPING] to reach a terminal state. Current 
> state: ST*OPPING
> ...
> 2020-06-08T12:46:08.5914230Z 135213 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager
>  [] -
>  2020-06-08T12:46:08.6054783Z [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 54.754 s <<< FAILURE! - in 
> org.apache.flink.streaming.con nectors.gcp.pubsub.CheckPubSubEmulatorTest
>  2020-06-08T12:46:08.6062906Z [ERROR] 
> testPull(org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest)
>  Time elapsed: 52.123 s <<< FAILURE!
>  2020-06-08T12:46:08.6063659Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> {code}
>  





[jira] [Commented] (FLINK-18418) document example error

2020-06-23 Thread Aven Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143455#comment-17143455
 ] 

Aven Wu commented on FLINK-18418:
-

missing `,` at end of line

> document example error
> --
>
> Key: FLINK-18418
> URL: https://issues.apache.org/jira/browse/FLINK-18418
> Project: Flink
>  Issue Type: Bug
>Reporter: appleyuchi
>Priority: Major
>
> OuterJoin with Flat-Join Function
> [https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/dataset_transformations.html]
> change
>  
> *public void join(Tuple2 movie, Rating rating*
> to
> *public void join(Tuple2 movie, Rating rating,*
>  
> please.
>  





[jira] [Issue Comment Deleted] (FLINK-18418) document example error

2020-06-23 Thread Aven Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aven Wu updated FLINK-18418:

Comment: was deleted

(was: missing `,` at end of line)

> document example error
> --
>
> Key: FLINK-18418
> URL: https://issues.apache.org/jira/browse/FLINK-18418
> Project: Flink
>  Issue Type: Bug
>Reporter: appleyuchi
>Priority: Major
>
> OuterJoin with Flat-Join Function
> [https://ci.apache.org/projects/flink/flink-docs-stable/dev/batch/dataset_transformations.html]
> change
>  
> *public void join(Tuple2 movie, Rating rating*
> to
> *public void join(Tuple2 movie, Rating rating,*
>  
> please.
>  





[jira] [Updated] (FLINK-18421) Elasticsearch (v6.3.1) sink end-to-end test instable

2020-06-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18421:

Labels: test-stability  (was: )

> Elasticsearch (v6.3.1) sink end-to-end test instable
> 
>
> Key: FLINK-18421
> URL: https://issues.apache.org/jira/browse/FLINK-18421
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a
> {code}
> 2020-06-23T22:28:10.5540446Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
> test' failed after 0 minutes and 36 seconds! Test exited with exit code 0 but 
> the logs contained errors, exceptions or non-empty .out files
> {code}
> exceptions in the log:
> {code}
> (1/1) (69721d998f4b68253d1e59f7f9065def) switched from DEPLOYING to RUNNING.
> 2020-06-23T22:28:10.5206189Z 2020-06-23 22:28:05,844 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
> Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
> (69721d998f4b68253d1e59f7f9065def) switched from RUNNING to FINISHED.
> 2020-06-23T22:28:10.5207672Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job 
> Elasticsearch 6.x end to end sink test example 
> (59aa77d1ad30f5bedd2759ecfe2bb870) switched from state RUNNING to FINISHED.
> 2020-06-23T22:28:10.5208679Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Stopping 
> checkpoint coordinator for job 59aa77d1ad30f5bedd2759ecfe2bb870.
> 2020-06-23T22:28:10.5209478Z 2020-06-23 22:28:05,852 INFO  
> org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore [] - 
> Shutting down
> 2020-06-23T22:28:10.5210355Z 2020-06-23 22:28:05,861 WARN  
> org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Error 
> encountered during shutdown
> 2020-06-23T22:28:10.5211371Z java.util.concurrent.CompletionException: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
> rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
> size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
> 2020-06-23T22:28:10.5212310Z  at 
> java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5212864Z  at 
> java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213404Z  at 
> java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:838) 
> ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5213939Z  at 
> java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
>  ~[?:1.8.0_252]
> 2020-06-23T22:28:10.5214480Z  at 
> java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5339896Z  at 
> java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5340629Z  at 
> java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:594)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341291Z  at 
> java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5341799Z  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5342271Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
> 2020-06-23T22:28:10.5342825Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5343516Z  at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344122Z  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5344635Z  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  [?:1.8.0_252]
> 2020-06-23T22:28:10.5345088Z  at java.lang.Thread.run(Thread.java:748) 
> [?:1.8.0_252]
> 2020-06-23T22:28:10.5345861Z Caused by: 
> java.util.concurrent.RejectedExecutionException: Task 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
> rejected from 
> java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
> size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
> 

[GitHub] [flink-web] liying919 commented on a change in pull request #345: [FLINK-17491] Translate Training page on project website

2020-06-23 Thread GitBox


liying919 commented on a change in pull request #345:
URL: https://github.com/apache/flink-web/pull/345#discussion_r444603628



##
File path: training.zh.md
##
@@ -58,49 +57,49 @@ This training covers the fundamentals of Flink, including:
 
 
 
- Streaming 
Analytics
+ 流式分析
 
 
 
-Event Time Processing
+事件时间处理
 Watermarks
-Windows
+窗口
 
 
 
 
 
 
 
- 
Event-driven Applications
+ 事件驱动的应用
 
 
 
-Process Functions
-Timers
-Side Outputs
+处理函数
+定时器
+旁路输出
 
 
 
 
 
 
 
- Fault 
Tolerance
+ 容错
 
 
 
-Checkpoints and Savepoints
-Exactly-once vs. At-least-once
-Exactly-once End-to-end
+Checkpoints 和 Savepoints
+精确一次与至少一次
+端到端的精确一次
 
 
 
 
 
 
 
-Apache Flink Training Course   
+Apache Flink 培训课程   

Review comment:
   @alpinegizmo Hi David, @klion26 is helping review this PR. We found that the 
link _{{site.DOCS_BASE_URL}}flink-docs-master/training_ in training.md 
automatically redirects to 
_https://ci.apache.org/projects/flink/flink-docs-master/learn-flink/index.html_. 
Do you know where we can set up the same redirect for the Chinese version?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-18421) Elasticsearch (v6.3.1) sink end-to-end test instable

2020-06-23 Thread Dian Fu (Jira)
Dian Fu created FLINK-18421:
---

 Summary: Elasticsearch (v6.3.1) sink end-to-end test instable
 Key: FLINK-18421
 URL: https://issues.apache.org/jira/browse/FLINK-18421
 Project: Flink
  Issue Type: Bug
  Components: Connectors / ElasticSearch, Tests
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=91bf6583-3fb2-592f-e4d4-d79d79c3230a

{code}
2020-06-23T22:28:10.5540446Z [FAIL] 'Elasticsearch (v6.3.1) sink end-to-end 
test' failed after 0 minutes and 36 seconds! Test exited with exit code 0 but 
the logs contained errors, exceptions or non-empty .out files
{code}

exceptions in the log:
{code}
(1/1) (69721d998f4b68253d1e59f7f9065def) switched from DEPLOYING to RUNNING.
2020-06-23T22:28:10.5206189Z 2020-06-23 22:28:05,844 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Source: 
Sequence Source -> Flat Map -> Sink: Unnamed (1/1) 
(69721d998f4b68253d1e59f7f9065def) switched from RUNNING to FINISHED.
2020-06-23T22:28:10.5207672Z 2020-06-23 22:28:05,852 INFO  
org.apache.flink.runtime.executiongraph.ExecutionGraph   [] - Job 
Elasticsearch 6.x end to end sink test example 
(59aa77d1ad30f5bedd2759ecfe2bb870) switched from state RUNNING to FINISHED.
2020-06-23T22:28:10.5208679Z 2020-06-23 22:28:05,852 INFO  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Stopping 
checkpoint coordinator for job 59aa77d1ad30f5bedd2759ecfe2bb870.
2020-06-23T22:28:10.5209478Z 2020-06-23 22:28:05,852 INFO  
org.apache.flink.runtime.checkpoint.StandaloneCompletedCheckpointStore [] - 
Shutting down
2020-06-23T22:28:10.5210355Z 2020-06-23 22:28:05,861 WARN  
org.apache.flink.runtime.checkpoint.CheckpointCoordinator[] - Error 
encountered during shutdown
2020-06-23T22:28:10.5211371Z java.util.concurrent.CompletionException: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
rejected from 
java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
2020-06-23T22:28:10.5212310Zat 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:273)
 ~[?:1.8.0_252]
2020-06-23T22:28:10.5212864Zat 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:280)
 ~[?:1.8.0_252]
2020-06-23T22:28:10.5213404Zat 
java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:838) 
~[?:1.8.0_252]
2020-06-23T22:28:10.5213939Zat 
java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:811)
 ~[?:1.8.0_252]
2020-06-23T22:28:10.5214480Zat 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
[?:1.8.0_252]
2020-06-23T22:28:10.5339896Zat 
java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575) 
[?:1.8.0_252]
2020-06-23T22:28:10.5340629Zat 
java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:594)
 [?:1.8.0_252]
2020-06-23T22:28:10.5341291Zat 
java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
 [?:1.8.0_252]
2020-06-23T22:28:10.5341799Zat 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_252]
2020-06-23T22:28:10.5342271Zat 
java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_252]
2020-06-23T22:28:10.5342825Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_252]
2020-06-23T22:28:10.5343516Zat 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_252]
2020-06-23T22:28:10.5344122Zat 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_252]
2020-06-23T22:28:10.5344635Zat 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_252]
2020-06-23T22:28:10.5345088Zat java.lang.Thread.run(Thread.java:748) 
[?:1.8.0_252]
2020-06-23T22:28:10.5345861Z Caused by: 
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask@6a7696d6 
rejected from 
java.util.concurrent.ScheduledThreadPoolExecutor@3c0b9ab1[Shutting down, pool 
size = 1, active threads = 1, queued tasks = 0, completed tasks = 1]
2020-06-23T22:28:10.5346852Zat 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
 ~[?:1.8.0_252]
2020-06-23T22:28:10.5347483Zat 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830) 
~[?:1.8.0_252]
2020-06-23T22:28:10.5348048Zat 
java.util.concurrent.ScheduledThreadPoolExecutor.delayedExecute(ScheduledThreadPoolExecutor.java:326)
 

[jira] [Updated] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18420:

Labels: test-stability  (was: )

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831ZPattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-23T23:07:01.9419058Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-23T23:07:01.9420497Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-06-23T23:07:01.9421198Z  

[jira] [Updated] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18420:

Component/s: Tests

> SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of 
> arguments (0 for 1)"
> ---
>
> Key: FLINK-18420
> URL: https://issues.apache.org/jira/browse/FLINK-18420
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / HBase, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6
> {code}
> [INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
> Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
> 2020-06-23T23:07:01.9397602Z [ERROR] 
> testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
> elapsed: 43.276 s  <<< ERROR!
> 2020-06-23T23:07:01.9398196Z java.io.IOException: 
> 2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
> output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
> deprecated in version 9.0 and will likely be removed in a future release.
> 2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
> has occurred
> 2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
> org.jruby.java.invokers.RubyToJavaInvoker 
> (file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
> method java.lang.Object.registerNatives()
> 2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
> maintainers of org.jruby.java.invokers.RubyToJavaInvoker
> 2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
> warnings of further illegal reflective access operations
> 2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
> denied in a future release
> 2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 
> 1)
> 2020-06-23T23:07:01.9405477Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
> 2020-06-23T23:07:01.9406654Z   method_added at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
> 2020-06-23T23:07:01.9407831ZPattern at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
> 2020-06-23T23:07:01.9408979Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
> 2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
> 2020-06-23T23:07:01.9410469Z (root) at 
> file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
> 2020-06-23T23:07:01.9411122Z (root) at 
> /tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
> 2020-06-23T23:07:01.9411481Z 
> 2020-06-23T23:07:01.9411996Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
> 2020-06-23T23:07:01.9412745Z  at 
> org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
> 2020-06-23T23:07:01.9413515Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
> 2020-06-23T23:07:01.9414502Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
> 2020-06-23T23:07:01.9415198Z  at 
> org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
> 2020-06-23T23:07:01.9415865Z  at 
> org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
> 2020-06-23T23:07:01.9416428Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-23T23:07:01.9416990Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-23T23:07:01.9417635Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-23T23:07:01.9419058Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-23T23:07:01.9420497Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-06-23T23:07:01.9421198Z  at 
> 

[jira] [Created] (FLINK-18420) SQLClientHBaseITCase.testHBase failed with "ArgumentError: wrong number of arguments (0 for 1)"

2020-06-23 Thread Dian Fu (Jira)
Dian Fu created FLINK-18420:
---

 Summary: SQLClientHBaseITCase.testHBase failed with 
"ArgumentError: wrong number of arguments (0 for 1)"
 Key: FLINK-18420
 URL: https://issues.apache.org/jira/browse/FLINK-18420
 Project: Flink
  Issue Type: Bug
  Components: Connectors / HBase
Affects Versions: 1.12.0
Reporter: Dian Fu


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3983=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=6caf31d6-847a-526e-9624-468e053467d6

{code}
[INFO] Running org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
2020-06-23T23:07:01.9393979Z [ERROR] Tests run: 1, Failures: 0, Errors: 1, 
Skipped: 0, Time elapsed: 43.277 s <<< FAILURE! - in 
org.apache.flink.tests.util.hbase.SQLClientHBaseITCase
2020-06-23T23:07:01.9397602Z [ERROR] 
testHBase(org.apache.flink.tests.util.hbase.SQLClientHBaseITCase)  Time 
elapsed: 43.276 s  <<< ERROR!
2020-06-23T23:07:01.9398196Z java.io.IOException: 
2020-06-23T23:07:01.9399343Z Process execution failed due error. Error 
output:OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was 
deprecated in version 9.0 and will likely be removed in a future release.
2020-06-23T23:07:01.9400131Z WARNING: An illegal reflective access operation 
has occurred
2020-06-23T23:07:01.9401440Z WARNING: Illegal reflective access by 
org.jruby.java.invokers.RubyToJavaInvoker 
(file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar) to 
method java.lang.Object.registerNatives()
2020-06-23T23:07:01.9402282Z WARNING: Please consider reporting this to the 
maintainers of org.jruby.java.invokers.RubyToJavaInvoker
2020-06-23T23:07:01.9403191Z WARNING: Use --illegal-access=warn to enable 
warnings of further illegal reflective access operations
2020-06-23T23:07:01.9403798Z WARNING: All illegal access operations will be 
denied in a future release
2020-06-23T23:07:01.9404516Z ArgumentError: wrong number of arguments (0 for 1)
2020-06-23T23:07:01.9405477Z   method_added at 
file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:10
2020-06-23T23:07:01.9406654Z   method_added at 
file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/javasupport/core_ext/object.rb:129
2020-06-23T23:07:01.9407831ZPattern at 
file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:2
2020-06-23T23:07:01.9408979Z (root) at 
file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:1
2020-06-23T23:07:01.9409598Zrequire at org/jruby/RubyKernel.java:1062
2020-06-23T23:07:01.9410469Z (root) at 
file:/tmp/junit3131433123777334326/hbase/lib/jruby-complete-1.6.8.jar!/builtin/java/java.util.regex.rb:42
2020-06-23T23:07:01.9411122Z (root) at 
/tmp/junit3131433123777334326/hbase/bin/../bin/hirb.rb:38
2020-06-23T23:07:01.9411481Z 
2020-06-23T23:07:01.9411996Zat 
org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:127)
2020-06-23T23:07:01.9412745Zat 
org.apache.flink.tests.util.AutoClosableProcess$AutoClosableProcessBuilder.runBlocking(AutoClosableProcess.java:108)
2020-06-23T23:07:01.9413515Zat 
org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:188)
2020-06-23T23:07:01.9414502Zat 
org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.executeHBaseShell(LocalStandaloneHBaseResource.java:179)
2020-06-23T23:07:01.9415198Zat 
org.apache.flink.tests.util.hbase.LocalStandaloneHBaseResource.createTable(LocalStandaloneHBaseResource.java:158)
2020-06-23T23:07:01.9415865Zat 
org.apache.flink.tests.util.hbase.SQLClientHBaseITCase.testHBase(SQLClientHBaseITCase.java:117)
2020-06-23T23:07:01.9416428Zat 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-06-23T23:07:01.9416990Zat 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-06-23T23:07:01.9417635Zat 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-06-23T23:07:01.9419058Zat 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
2020-06-23T23:07:01.9420497Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-06-23T23:07:01.9421198Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-06-23T23:07:01.9421729Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-06-23T23:07:01.940Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-06-23T23:07:01.9422716Zat 

[GitHub] [flink] RocMarshal commented on pull request #12727: [FLINK-17292][docs] Translate Fault Tolerance training lesson to Chinese

2020-06-23 Thread GitBox


RocMarshal commented on pull request #12727:
URL: https://github.com/apache/flink/pull/12727#issuecomment-648533794


   Hi @klion26,
   I have made some changes based on your suggestions, which were very helpful.
   Thank you so much.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444595912



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,42 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time 
attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a 
processing-time temporal table function will always return the latest known 
versions of the underlying table
-and any updates in the underlying history table will also immediately 
overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。

Review comment:
   Fixed.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-17912) KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always increase"

2020-06-23 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143436#comment-17143436
 ] 

Dian Fu commented on FLINK-17912:
-

KafkaShuffleITCase.testSimpleEventTime also failed with the same error message 
on the master branch: "Watermark should always increase: current"
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3972=logs=ae4f8708-9994-57d3-c2d7-b892156e7812=c5f0071e-1851-543e-9a45-9ac140befc32
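The failure message quoted in this ticket comes from a monotonicity assertion on emitted watermarks. The sketch below illustrates the invariant being asserted; the class and method names are hypothetical, not Flink's actual test code.

```java
import java.util.List;

public class WatermarkMonotonicityCheck {
    // Illustrative check: watermarks must be strictly increasing.
    static void validate(List<Long> watermarks) {
        long previous = Long.MIN_VALUE;
        for (long current : watermarks) {
            if (current <= previous) {
                throw new AssertionError(
                    "Watermark should always increase: current " + current
                        + " <= previous " + previous);
            }
            previous = current;
        }
    }

    public static void main(String[] args) {
        validate(List.of(1L, 5L, 9L)); // strictly increasing: passes
        try {
            validate(List.of(1L, 5L, 5L)); // repeated watermark: fails
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```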

> KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always 
> increase"
> ---
>
> Key: FLINK-17912
> URL: https://issues.apache.org/jira/browse/FLINK-17912
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=0d9ad4c1-5629-5ffc-10dc-113ca91e23c5
> {code}
> 2020-05-22T21:16:24.7188044Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-22T21:16:24.7188796Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-22T21:16:24.7189596Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
> 2020-05-22T21:16:24.7190352Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
> 2020-05-22T21:16:24.7191261Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1673)
> 2020-05-22T21:16:24.7191824Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
> 2020-05-22T21:16:24.7192325Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartition(KafkaShuffleITCase.java:296)
> 2020-05-22T21:16:24.7192962Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartitionEventTime(KafkaShuffleITCase.java:126)
> 2020-05-22T21:16:24.7193436Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-22T21:16:24.7193999Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-22T21:16:24.7194720Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-22T21:16:24.7195226Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-22T21:16:24.7195864Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-22T21:16:24.7196574Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-22T21:16:24.7197511Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-22T21:16:24.7198020Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-22T21:16:24.7198494Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-22T21:16:24.7199128Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-22T21:16:24.7199689Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-22T21:16:24.7200308Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-22T21:16:24.7200645Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-22T21:16:24.7201029Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-05-22T21:16:24.7201643Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> 2020-05-22T21:16:24.7202275Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> 2020-05-22T21:16:24.7202863Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> 2020-05-22T21:16:24.7203525Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
> 2020-05-22T21:16:24.7204072Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
> 2020-05-22T21:16:24.7204618Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
> 2020-05-22T21:16:24.7205255Z  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
> 2020-05-22T21:16:24.7205716Z  at 
> 

[jira] [Updated] (FLINK-17912) KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always increase"

2020-06-23 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17912:

Affects Version/s: 1.12.0

> KafkaShuffleITCase.testAssignedToPartitionEventTime: "Watermark should always 
> increase"
> ---
>
> Key: FLINK-17912
> URL: https://issues.apache.org/jira/browse/FLINK-17912
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2062=logs=1fc6e7bf-633c-5081-c32a-9dea24b05730=0d9ad4c1-5629-5ffc-10dc-113ca91e23c5
> {code}
> 2020-05-22T21:16:24.7188044Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-05-22T21:16:24.7188796Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-05-22T21:16:24.7189596Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
> 2020-05-22T21:16:24.7190352Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
> 2020-05-22T21:16:24.7191261Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1673)
> 2020-05-22T21:16:24.7191824Z  at 
> org.apache.flink.test.util.TestUtils.tryExecute(TestUtils.java:35)
> 2020-05-22T21:16:24.7192325Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartition(KafkaShuffleITCase.java:296)
> 2020-05-22T21:16:24.7192962Z  at 
> org.apache.flink.streaming.connectors.kafka.shuffle.KafkaShuffleITCase.testAssignedToPartitionEventTime(KafkaShuffleITCase.java:126)
> 2020-05-22T21:16:24.7193436Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-05-22T21:16:24.7193999Z  at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-05-22T21:16:24.7194720Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-22T21:16:24.7195226Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-22T21:16:24.7195864Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-05-22T21:16:24.7196574Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-05-22T21:16:24.7197511Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-05-22T21:16:24.7198020Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-05-22T21:16:24.7198494Z  at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
> 2020-05-22T21:16:24.7199128Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-05-22T21:16:24.7199689Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-05-22T21:16:24.7200308Z  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
> 2020-05-22T21:16:24.7200645Z  at java.lang.Thread.run(Thread.java:748)
> 2020-05-22T21:16:24.7201029Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-05-22T21:16:24.7201643Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> 2020-05-22T21:16:24.7202275Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> 2020-05-22T21:16:24.7202863Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> 2020-05-22T21:16:24.7203525Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
> 2020-05-22T21:16:24.7204072Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
> 2020-05-22T21:16:24.7204618Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:503)
> 2020-05-22T21:16:24.7205255Z  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
> 2020-05-22T21:16:24.7205716Z  at 
> sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
> 2020-05-22T21:16:24.7206191Z  at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-05-22T21:16:24.7206585Z  at 
> java.lang.reflect.Method.invoke(Method.java:498)
> 2020-05-22T21:16:24.7207261Z  

[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591826



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,42 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time 
attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a 
processing-time temporal table function will always return the latest known 
versions of the underlying table
-and any updates in the underlying history table will also immediately 
overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 
的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the 
build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join 
results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous 
record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most 
recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap `,HashMap 中存储来自构建侧的所有记录。

Review comment:
   Fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
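The review comment above concerns the doc text that likens a processing-time temporal join to a simple `HashMap` storing the build side. A minimal Java sketch of that mental model (class and method names are illustrative, not Flink API):

```java
import java.util.HashMap;
import java.util.Map;

// Mental model only: the build side is a HashMap keyed by the primary
// key; each new build-side record overwrites the previous version, and
// every probe-side record is evaluated against the current map.
public class ProcessingTimeTemporalJoinSketch {
    private final Map<String, Double> latestRates = new HashMap<>();

    // build side (e.g. Rates): keep only the latest version per key
    public void onBuildSide(String currency, double rate) {
        latestRates.put(currency, rate); // old value is simply overwritten
    }

    // probe side (e.g. Orders): always joins the most recent version;
    // results emitted earlier are not revised by later build-side updates
    public Double onProbeSide(String currency, double amount) {
        Double rate = latestRates.get(currency);
        return rate == null ? null : amount * rate;
    }

    public static void main(String[] args) {
        ProcessingTimeTemporalJoinSketch join = new ProcessingTimeTemporalJoinSketch();
        join.onBuildSide("EUR", 2.0);
        System.out.println(join.onProbeSide("EUR", 100)); // 200.0
        join.onBuildSide("EUR", 4.0);                     // overwrite
        System.out.println(join.onProbeSide("EUR", 100)); // 400.0
    }
}
```

This also makes clear why only the latest version per primary key needs to be kept in state.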




[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591702



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,42 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。

Review comment:
   fixed









[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591256



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 
的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ 
site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -
 
-Regular joins are the most generic type of join in which any new records or 
changes to either side of the join input are visible and are affecting the 
whole join result.
-For example, if there is a new record on the left side, it will be joined with 
all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 
的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input 
tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
 
-However, this operation has an important implication: it requires to keep both 
sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input 
tables are continuously growing.
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。
+因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。
 
-Time-windowed Joins
+时间窗口 Join

Review comment:
   
https://github.com/apache/flink/blob/b6e2f9fb178649c305eb0881be57a46f9ce9911a/docs/dev/table/sql/queries.zh.md
   
   由于 `Time-windowed` 已被重命名为 `Interval 
Joins`,参考[此处](https://github.com/apache/flink/blob/b6e2f9fb178649c305eb0881be57a46f9ce9911a/docs/dev/table/sql/queries.zh.md)的翻译为“时间区间
 Join”
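The hunk above notes that a regular join must keep both join inputs in Flink's state forever, so resource usage grows without bound if the inputs keep growing. A toy Java model of why (names are illustrative, not Flink internals):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model: a regular streaming inner join must retain every record
// from both inputs, because any future record on one side may still
// match any past record on the other side.
public class RegularJoinSketch {
    private final Map<Long, List<String>> leftState = new HashMap<>();
    private final Map<Long, List<String>> rightState = new HashMap<>();

    public List<String> onLeft(long key, String row) {
        leftState.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        List<String> out = new ArrayList<>();
        for (String r : rightState.getOrDefault(key, new ArrayList<>())) {
            out.add(row + "|" + r); // a new left record joins all past right records
        }
        return out;
    }

    public List<String> onRight(long key, String row) {
        rightState.computeIfAbsent(key, k -> new ArrayList<>()).add(row);
        List<String> out = new ArrayList<>();
        for (String l : leftState.getOrDefault(key, new ArrayList<>())) {
            out.add(l + "|" + row);
        }
        return out;
    }

    public int stateSize() { // grows forever if the input streams keep growing
        int n = 0;
        for (List<String> v : leftState.values()) n += v.size();
        for (List<String> v : rightState.values()) n += v.size();
        return n;
    }
}
```

Interval (time-windowed) joins avoid this by bounding how long each record can still match, so old state can be dropped.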









[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591375



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build 
side table at the time of the correlated time attribute of the probe side 
record.
-In order to support updates (overwrites) of previous values on the build side 
table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of 
`Rates` at time `o.rowtime`. The `currency` field has been defined as the 
primary key of `Rates` before and is used to connect both tables in our 
example. If the query were using a processing-time notion, a newly appended 
order would always be joined with the most recent version of `Rates` when 
executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 
字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 
processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a 
new record on the build side, it will not affect the previous results of the 
join.
-This again allows Flink to limit the number of elements that must be kept in 
the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 
能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins 
do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at 
the time specified by the time attribute. Thus, records on the build side might 
be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for 
the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 
运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 
操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream 
enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table 
function](temporal_tables.html#defining-temporal-table-function), we can start 
using it.
-Temporal table functions can be used in the same way as normal table functions 
would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。
+临时表函数可以和普通表函数一样使用。

Review comment:
   Fixed
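The hunk above explains that in an event-time temporal table join, each probe-side record joins the build side's version valid at the probe record's time attribute, and older versions can be dropped from state. A small Java sketch of that versioned lookup (a `TreeMap` of versions per primary key, with `floorEntry` as "the version at time t"; names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Sketch of event-time temporal join semantics: the build side keeps
// timestamped versions per primary key; a probe record at time t joins
// the latest version with timestamp <= t, not necessarily the newest.
public class EventTimeTemporalJoinSketch {
    private final Map<String, TreeMap<Long, Double>> versions = new HashMap<>();

    public void onBuildSide(long time, String currency, double rate) {
        versions.computeIfAbsent(currency, k -> new TreeMap<>()).put(time, rate);
    }

    public Double probe(long time, String currency, double amount) {
        TreeMap<Long, Double> v = versions.get(currency);
        if (v == null) return null;
        Map.Entry<Long, Double> e = v.floorEntry(time); // version valid at t
        return e == null ? null : amount * e.getValue();
    }

    public static void main(String[] args) {
        EventTimeTemporalJoinSketch join = new EventTimeTemporalJoinSketch();
        join.onBuildSide(1L, "EUR", 2.0);
        join.onBuildSide(10L, "EUR", 3.0);
        System.out.println(join.probe(5L, "EUR", 100));  // 200.0 (version from t=1)
        System.out.println(join.probe(12L, "EUR", 100)); // 300.0 (version from t=10)
    }
}
```

As event time advances, versions older than any possible probe record can be evicted, which is what bounds the state compared to a regular join.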









[jira] [Comment Edited] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread shaokan cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143432#comment-17143432
 ] 

shaokan cao edited comment on FLINK-18371 at 6/24/20, 1:12 AM:
---

[~Leonard Xu] [~libenchao] [~jark] [~zjwang] [~xiaojin.wy], I tested this case on release-1.11.0-rc2 and 1.10 (master), and the error above did not appear.
{code:java}
CREATE TABLE `src` ( 
key bigint, v varchar 
) WITH ( 
'connector'='filesystem', 
'csv.field-delimiter'='|', 
'path'='/Users/r/fdata.csv', 
'csv.null-literal'='', 
'format'='csv' );


select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

//result

"0E-18"|"0"|"0"

{code}


was (Author: caoshaokan):
[~Leonard Xu] [~libenchao] [~jark] [~zjwang], I tested this case on release-1.11.0-rc2 and 1.10 (master), and the error above did not appear.
{code:java}
CREATE TABLE `src` ( 
key bigint, v varchar 
) WITH ( 
'connector'='filesystem', 
'csv.field-delimiter'='|', 
'path'='/Users/r/fdata.csv', 
'csv.null-literal'='', 
'format'='csv' );


select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

//result

"0E-18"|"0"|"0"

{code}

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 

[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444589987



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build 
side table at the time of the correlated time attribute of the probe side 
record.
-In order to support updates (overwrites) of previous values on the build side 
table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of 
`Rates` at time `o.rowtime`. The `currency` field has been defined as the 
primary key of `Rates` before and is used to connect both tables in our 
example. If the query were using a processing-time notion, a newly appended 
order would always be joined with the most recent version of `Rates` when 
executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 
字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 
processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a 
new record on the build side, it will not affect the previous results of the 
join.
-This again allows Flink to limit the number of elements that must be kept in 
the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 
能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins 
do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at 
the time specified by the time attribute. Thus, records on the build side might 
be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for 
the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 
运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 
操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream 
enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table 
function](temporal_tables.html#defining-temporal-table-function), we can start 
using it.
-Temporal table functions can be used in the same way as normal table functions 
would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。

Review comment:
   Fixed









[jira] [Comment Edited] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread shaokan cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143432#comment-17143432
 ] 

shaokan cao edited comment on FLINK-18371 at 6/24/20, 1:11 AM:
---

[~Leonard Xu] [~libenchao] [~jark] [~zjwang], I tested this case on release-1.11.0-rc2 and 1.10 (master), and the error above did not appear.
{code:java}
CREATE TABLE `src` ( 
key bigint, v varchar 
) WITH ( 
'connector'='filesystem', 
'csv.field-delimiter'='|', 
'path'='/Users/r/fdata.csv', 
'csv.null-literal'='', 
'format'='csv' );


select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

//result

"0E-18"|"0"|"0"

{code}


was (Author: caoshaokan):
[~Leonard Xu] [~libenchao], I tested this case on release-1.11.0-rc2 and 1.10 (master), and the error above did not appear.
{code:java}
CREATE TABLE `src` ( 
key bigint, v varchar 
) WITH ( 
'connector'='filesystem', 
'csv.field-delimiter'='|', 
'path'='/Users/r/fdata.csv', 
'csv.null-literal'='', 
'format'='csv' );


select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

//result

"0E-18"|"0"|"0"

{code}

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 

[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444589792



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build 
side table at the time of the correlated time attribute of the probe side 
record.
-In order to support updates (overwrites) of previous values on the build side 
table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of 
`Rates` at time `o.rowtime`. The `currency` field has been defined as the 
primary key of `Rates` before and is used to connect both tables in our 
example. If the query were using a processing-time notion, a newly appended 
order would always be joined with the most recent version of `Rates` when 
executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 
字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 
processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a 
new record on the build side, it will not affect the previous results of the 
join.
-This again allows Flink to limit the number of elements that must be kept in 
the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 
能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。

Review comment:
   Fixed









[jira] [Commented] (FLINK-18371) NPE of "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"

2020-06-23 Thread shaokan cao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18371?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143432#comment-17143432
 ] 

shaokan cao commented on FLINK-18371:
-

[~Leonard Xu] [~libenchao], I tested this case on release-1.11.0-rc2 and 1.10 (master), and the error above did not appear.
{code:java}
CREATE TABLE `src` ( 
key bigint, v varchar 
) WITH ( 
'connector'='filesystem', 
'csv.field-delimiter'='|', 
'path'='/Users/r/fdata.csv', 
'csv.null-literal'='', 
'format'='csv' );


select
cast(key as decimal(10,2)) as c1,
cast(key as char(10)) as c2,
cast(key as varchar(10)) as c3
from src
order by c1, c2, c3
limit 1;

//result

"0E-18"|"0"|"0"

{code}
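The `"0E-18"` in the result above is simply how `BigDecimal.toString()` renders a zero with a large scale (scientific notation is used once the adjusted exponent drops below -6); that the cast produces a scale-18 zero here is presumably a detail of Flink's internal decimal representation. A quick standalone check of the printing behavior:

```java
import java.math.BigDecimal;
import java.math.BigInteger;

// BigDecimal renders zero with scale 18 in scientific notation,
// which is where the "0E-18" string in the query result comes from.
public class ZeroScalePrinting {
    public static void main(String[] args) {
        BigDecimal zero = new BigDecimal(BigInteger.ZERO, 18);
        System.out.println(zero.toString());      // 0E-18
        System.out.println(zero.toPlainString()); // 0.000000000000000000
    }
}
```

So the value itself is an ordinary zero; only the default string form looks surprising.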

> NPE of 
> "org.apache.flink.table.data.util.DataFormatConverters$BigDecimalConverter.toExternalImpl(DataFormatConverters.java:680)"
> 
>
> Key: FLINK-18371
> URL: https://issues.apache.org/jira/browse/FLINK-18371
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
> Environment: I use the sql-gateway to run this sql.
> The environment is streaming.
> *The sql is:*
> CREATE TABLE `src` (
>   key bigint,
>   v varchar
> ) WITH (
>   'connector'='filesystem',
>   'csv.field-delimiter'='|',
>   
> 'path'='/defender_test_data/daily_regression_stream_hive_1.10/test_cast/sources/src.csv',
>   'csv.null-literal'='',
>   'format'='csv'
> )
> select
> cast(key as decimal(10,2)) as c1,
> cast(key as char(10)) as c2,
> cast(key as varchar(10)) as c3
> from src
> order by c1, c2, c3
> limit 1
> *The input data is:*
> 238|val_238
> 86|val_86
> 311|val_311
> 27|val_27
> 165|val_165
> 409|val_409
> 255|val_255
> 278|val_278
> 98|val_98
> 484|val_484
> 265|val_265
> 193|val_193
> 401|val_401
> 150|val_150
> 273|val_273
> 224|val_224
> 369|val_369
> 66|val_66
> 128|val_128
> 213|val_213
> 146|val_146
> 406|val_406
> 429|val_429
> 374|val_374
> 152|val_152
> 469|val_469
> 145|val_145
> 495|val_495
> 37|val_37
> 327|val_327
> 281|val_281
> 277|val_277
> 209|val_209
> 15|val_15
> 82|val_82
> 403|val_403
> 166|val_166
> 417|val_417
> 430|val_430
> 252|val_252
> 292|val_292
> 219|val_219
> 287|val_287
> 153|val_153
> 193|val_193
> 338|val_338
> 446|val_446
> 459|val_459
> 394|val_394
> 237|val_237
> 482|val_482
> 174|val_174
> 413|val_413
> 494|val_494
> 207|val_207
> 199|val_199
> 466|val_466
> 208|val_208
> 174|val_174
> 399|val_399
> 396|val_396
> 247|val_247
> 417|val_417
> 489|val_489
> 162|val_162
> 377|val_377
> 397|val_397
> 309|val_309
> 365|val_365
> 266|val_266
> 439|val_439
> 342|val_342
> 367|val_367
> 325|val_325
> 167|val_167
> 195|val_195
> 475|val_475
> 17|val_17
> 113|val_113
> 155|val_155
> 203|val_203
> 339|val_339
> 0|val_0
> 455|val_455
> 128|val_128
> 311|val_311
> 316|val_316
> 57|val_57
> 302|val_302
> 205|val_205
> 149|val_149
> 438|val_438
> 345|val_345
> 129|val_129
> 170|val_170
> 20|val_20
> 489|val_489
> 157|val_157
> 378|val_378
> 221|val_221
> 92|val_92
> 111|val_111
> 47|val_47
> 72|val_72
> 4|val_4
> 280|val_280
> 35|val_35
> 427|val_427
> 277|val_277
> 208|val_208
> 356|val_356
> 399|val_399
> 169|val_169
> 382|val_382
> 498|val_498
> 125|val_125
> 386|val_386
> 437|val_437
> 469|val_469
> 192|val_192
> 286|val_286
> 187|val_187
> 176|val_176
> 54|val_54
> 459|val_459
> 51|val_51
> 138|val_138
> 103|val_103
> 239|val_239
> 213|val_213
> 216|val_216
> 430|val_430
> 278|val_278
> 176|val_176
> 289|val_289
> 221|val_221
> 65|val_65
> 318|val_318
> 332|val_332
> 311|val_311
> 275|val_275
> 137|val_137
> 241|val_241
> 83|val_83
> 333|val_333
> 180|val_180
> 284|val_284
> 12|val_12
> 230|val_230
> 181|val_181
> 67|val_67
> 260|val_260
> 404|val_404
> 384|val_384
> 489|val_489
> 353|val_353
> 373|val_373
> 272|val_272
> 138|val_138
> 217|val_217
> 84|val_84
> 348|val_348
> 466|val_466
> 58|val_58
> 8|val_8
> 411|val_411
> 230|val_230
> 208|val_208
> 348|val_348
> 24|val_24
> 463|val_463
> 431|val_431
> 179|val_179
> 172|val_172
> 42|val_42
> 129|val_129
> 158|val_158
> 119|val_119
> 496|val_496
> 0|val_0
> 322|val_322
> 197|val_197
> 468|val_468
> 393|val_393
> 454|val_454
> 100|val_100
> 298|val_298
> 199|val_199
> 191|val_191
> 418|val_418
> 96|val_96
> 26|val_26
> 165|val_165
> 327|val_327
> 230|val_230
> 205|val_205
> 120|val_120
> 131|val_131
> 51|val_51
> 404|val_404
> 43|val_43
> 436|val_436
> 156|val_156
> 469|val_469
> 468|val_468
> 308|val_308
> 95|val_95
> 196|val_196
> 288|val_288
> 481|val_481
> 457|val_457
> 98|val_98
> 282|val_282
> 197|val_197
> 187|val_187
> 318|val_318
> 318|val_318
> 409|val_409
> 470|val_470
> 137|val_137
> 369|val_369
> 316|val_316
> 169|val_169
> 413|val_413
> 85|val_85
> 77|val_77
> 

[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444588046



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,42 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join

Review comment:
   Fixed





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444587512



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
   o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports 
append-only tables with time attributes. Since time attributes are 
quasi-monotonic increasing, Flink can remove old values from its state without 
affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 
可以从状态中移除过期的数据,而不会影响结果的正确性。
 
-Join with a Temporal Table Function
+临时表函数 Join
 --
 
-A join with a temporal table function joins an append-only table (left 
input/probe side) with a temporal table (right input/build side),
-i.e., a table that changes over time and tracks its changes. Please check the 
corresponding page for more information about [temporal 
tables](temporal_tables.html).
+临时表函数 Join 
连接了一个递增表(左输入/探针侧)和一个临时表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[临时表](temporal_tables.html)的相关章节查看更多细节。

Review comment:
   I couldn't come up with a better translation either. '构建侧' is indeed not an established term. Could you help think about how best to translate it here?









[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444587139



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
   o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports 
append-only tables with time attributes. Since time attributes are 
quasi-monotonic increasing, Flink can remove old values from its state without 
affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 
可以从状态中移除过期的数据,而不会影响结果的正确性。

Review comment:
   Fixed









[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444586022



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -189,50 +183,43 @@ val result = orders
 
 
 
-**Note**: State retention defined in a [query 
configuration](query_configuration.html) is not yet implemented for temporal 
joins.
-This means that the required state to compute the query result might grow 
infinitely depending on the number of distinct primary keys for the history 
table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 
中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 
的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time 
attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a 
processing-time temporal table function will always return the latest known 
versions of the underlying table
-and any updates in the underlying history table will also immediately 
overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the 
build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join 
results.
+可以将 processing-time 的时态 Join 视作简单的哈希Map `HashMap `,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous 
record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most 
recent/current state of the `HashMap`.
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。

Review comment:
   Updated.





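The `HashMap` analogy in the hunk above (a processing-time temporal join keeps only the latest build-side version per key, and every probe record is evaluated against the current snapshot) can be sketched outside Flink like this; the class and method names are made up purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not Flink code: processing-time temporal join
// semantics reduced to "latest value per key" plus "probe the current snapshot".
public class ProcTimeTemporalJoinSketch {
    private final Map<String, String> buildState = new HashMap<>();

    void onBuildRecord(String key, String value) {
        // A newer version simply overwrites the old one; previously
        // emitted join results are not affected.
        buildState.put(key, value);
    }

    String onProbeRecord(String key) {
        // The probe side always sees the most recent known version.
        return buildState.get(key);
    }

    public static void main(String[] args) {
        ProcTimeTemporalJoinSketch join = new ProcTimeTemporalJoinSketch();
        join.onBuildRecord("EUR", "1.10");
        System.out.println(join.onProbeRecord("EUR")); // 1.10
        join.onBuildRecord("EUR", "1.12");             // overwrites the old rate
        System.out.println(join.onProbeRecord("EUR")); // 1.12
    }
}
```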




[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

2020-06-23 Thread GitBox


authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444584279



##
File path: docs/dev/table/streaming/joins.zh.md
##
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to 
connect the rows of two relations. However, the semantics of joins on [dynamic 
tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 
的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using 
either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in 
[Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl 
}}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ 
site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+
+
+常规 Join
 -
 
-Regular joins are the most generic type of join in which any new records or 
changes to either side of the join input are visible and are affecting the 
whole join result.
-For example, if there is a new record on the left side, it will be joined with 
all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 
的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录进行 Join 运算。

Review comment:
   Updated.









[GitHub] [flink] flinkbot edited a comment on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by L

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648347014


   
   ## CI report:
   
   * 5d38f42fbfb91f633c20a788ab8b15f79c82f9d7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3980)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-12855) Stagger TumblingProcessingTimeWindow processing to distribute workload

2020-06-23 Thread Teng Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-12855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143330#comment-17143330
 ] 

Teng Hu commented on FLINK-12855:
-

The idea came from our own practice of [scaling up the 
application|https://youtu.be/9U8ksIqrgLM], also inspired by this 
[blogpost|https://klaviyo.tech/flinkperf-c7bd28acc67].

Yes, I agree the naming could be confusing, since there seems to be no standard 
way of defining these windowing functions in the industry. I actually like the 
idea of creating a new window type, but I would like to hear what the rest of 
the community thinks.

The event-time window change is 
[out|https://github.com/apache/flink/pull/12640], with more to come.

Thanks,
Niel

> Stagger TumblingProcessingTimeWindow processing to distribute workload
> --
>
> Key: FLINK-12855
> URL: https://issues.apache.org/jira/browse/FLINK-12855
> Project: Flink
>  Issue Type: New Feature
>  Components: API / DataStream
>Reporter: Teng Hu
>Assignee: Teng Hu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
> Attachments: stagger_window.png, stagger_window_delay.png, 
> stagger_window_throughput.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Flink natively triggers all panes belonging to the same window at the same 
> time. In other words, all panes are aligned and their triggers all fire 
> simultaneously, causing a thundering-herd effect.
> This new feature provides the option to stagger panes across partitioned 
> streams so that their workloads are distributed.
> Attachment: proof of concept working



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
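The staggering described in the issue amounts to shifting each partition's tumbling-window boundaries by a per-partition offset, so the triggers no longer all fire at the same instant. A hedged sketch (not the actual PR code; `windowStart` is a made-up helper):

```java
public class StaggeredWindowSketch {
    /**
     * Start of the tumbling window containing {@code timestamp}, with the
     * window grid shifted by a per-partition stagger offset.
     */
    static long windowStart(long timestamp, long windowSize, long staggerOffset) {
        long shifted = timestamp - staggerOffset;
        return shifted - Math.floorMod(shifted, windowSize) + staggerOffset;
    }

    public static void main(String[] args) {
        // Without staggering, every partition fires at t = 0, 5, 10, ...
        System.out.println(windowStart(10, 5, 0)); // 10
        // With an offset of 2, this partition fires at t = 2, 7, 12, ... instead,
        // spreading the processing-time trigger load across partitions.
        System.out.println(windowStart(10, 5, 2)); // 7
    }
}
```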


[GitHub] [flink] flinkbot edited a comment on pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12758:
URL: https://github.com/apache/flink/pull/12758#issuecomment-648346882


   
   ## CI report:
   
   * e14e00b7039fbac4800a8ef83d04e02bd621824c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3979)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #11245: [FLINK-15794][Kubernetes] Generate the Kubernetes default image version

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #11245:
URL: https://github.com/apache/flink/pull/11245#issuecomment-592002512


   
   ## CI report:
   
   * f1d4f9163a2cd60daa02fe6b7af5a459cee9660c UNKNOWN
   * 4d11da053e608d9d24e70c001fbb6b1469a392d7 UNKNOWN
   * e180236be81cbad2b936db2f57e764d1243e980c Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3976)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12746: [FLINK-15416][task][network] Retry connection to the upstream

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12746:
URL: https://github.com/apache/flink/pull/12746#issuecomment-647660048


   
   ## CI report:
   
   * 6d5d1c73dc7f1416ca1677bb6d4aa31c01f69190 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3931)
 
   * ebc49551875469c1f2851bc5bac22649cbaaa5d9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3975)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] sjwiesman commented on a change in pull request #12722: [FLINK-18064][docs] Added unaligned checkpointing to docs.

2020-06-23 Thread GitBox


sjwiesman commented on a change in pull request #12722:
URL: https://github.com/apache/flink/pull/12722#discussion_r67855



##
File path: docs/ops/state/checkpoints.md
##
@@ -113,4 +113,50 @@ above).
 $ bin/flink run -s :checkpointMetaDataPath [:runArgs]
 {% endhighlight %}
 
+### Unaligned checkpoints
+
+Starting with Flink 1.11, checkpoints can be unaligned (experimental). 
+[Unaligned checkpoints]({% link concepts/stateful-stream-processing.md
+%}#unaligned-checkpointing) contain in-flight data (i.e., data stored in
+buffers) as part of the checkpoint state, which allows checkpoint barriers to
+overtake these buffers. Thus, the checkpoint duration becomes independent of 
the
+current throughput as checkpoint barriers are effectively not embedded into 
+the stream of data anymore.
+
+You should use unaligned checkpoints if your checkpointing durations are very
+high due to back-pressure. Then, checkpointing time becomes mostly
+independent of the end-to-end latency. Be aware unaligned checkpointing
+adds to I/O to the state backends, so you shouldn't use it when the I/O to
+the state backend is actually the bottleneck during checkpointing.
+
+We flagged unaligned checkpoints as experimental as it currently has the
+following shortcomings:
+
+- You cannot rescale from unaligned checkpoints. You have to take a savepoint 
+before rescaling. Savepoints are always aligned independent of the alignment

Review comment:
   Yes, technically, but it's incidental. The community hasn't made any 
backward-compatibility guarantees around that behavior. 
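For readers of this thread: turning the documented feature on is a single call on the checkpoint configuration. This is an illustrative sketch that assumes the Flink 1.11 `CheckpointConfig#enableUnalignedCheckpoints()` API, not a complete job:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class UnalignedCheckpointsExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000); // checkpoint every 10 seconds
        // Experimental in 1.11: barriers may overtake buffered in-flight data,
        // decoupling checkpoint duration from back-pressure.
        env.getCheckpointConfig().enableUnalignedCheckpoints();
        // Caveat from the docs above: rescaling from an unaligned checkpoint is
        // not supported; take a (always aligned) savepoint before rescaling.
    }
}
```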









[jira] [Updated] (FLINK-18194) Update Table API Walkthrough

2020-06-23 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman updated FLINK-18194:
-
Fix Version/s: 1.12.0
   1.11.0

> Update Table API Walkthrough
> 
>
> Key: FLINK-18194
> URL: https://issues.apache.org/jira/browse/FLINK-18194
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.12.0
>
>
> The Table API walkthrough is extremely outdated and at this point only works 
> with internal and deprecated methods! It needs to be updated to use modern 
> table api constructs. 





[jira] [Resolved] (FLINK-18194) Update Table API Walkthrough

2020-06-23 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman resolved FLINK-18194.
--
Resolution: Fixed

> Update Table API Walkthrough
> 
>
> Key: FLINK-18194
> URL: https://issues.apache.org/jira/browse/FLINK-18194
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0, 1.12.0
>
>
> The Table API walkthrough is extremely outdated and at this point only works 
> with internal and deprecated methods! It needs to be updated to use modern 
> table api constructs. 





[jira] [Closed] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-06-23 Thread Seth Wiesman (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Seth Wiesman closed FLINK-18341.

Resolution: Fixed

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334
> {noformat}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46]
>  cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment
>   bad class file: 
> /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class)
> class file has wrong version 55.0, should be 52.0
> Please remove or make sure it appears in the correct subdirectory of the 
> classpath.
> (...)
> [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 
> minutes and 4 seconds! Test exited with exit code 1
> {noformat}





[jira] [Commented] (FLINK-18341) Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR

2020-06-23 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143246#comment-17143246
 ] 

Seth Wiesman commented on FLINK-18341:
--

Fixed in master: afebdc2f19a7e5439d9550bd8ffba609fba9a7dc

> Building Flink Walkthrough Table Java 0.1 COMPILATION ERROR
> ---
>
> Key: FLINK-18341
> URL: https://issues.apache.org/jira/browse/FLINK-18341
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API, Tests
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Assignee: Seth Wiesman
>Priority: Critical
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3652=logs=08866332-78f7-59e4-4f7e-49a56faa3179=931b3127-d6ee-5f94-e204-48d51cd1c334
> {noformat}
> [ERROR] COMPILATION ERROR : 
> [INFO] -
> [ERROR] 
> /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22294375765/flink-walkthrough-table-java/src/main/java/org/apache/flink/walkthrough/SpendReport.java:[23,46]
>  cannot access org.apache.flink.table.api.bridge.java.BatchTableEnvironment
>   bad class file: 
> /home/vsts/work/1/.m2/repository/org/apache/flink/flink-table-api-java-bridge_2.11/1.12-SNAPSHOT/flink-table-api-java-bridge_2.11-1.12-SNAPSHOT.jar(org/apache/flink/table/api/bridge/java/BatchTableEnvironment.class)
> class file has wrong version 55.0, should be 52.0
> Please remove or make sure it appears in the correct subdirectory of the 
> classpath.
> (...)
> [FAIL] 'Walkthrough Table Java nightly end-to-end test' failed after 0 
> minutes and 4 seconds! Test exited with exit code 1
> {noformat}





[jira] [Commented] (FLINK-18194) Update Table API Walkthrough

2020-06-23 Thread Seth Wiesman (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143245#comment-17143245
 ] 

Seth Wiesman commented on FLINK-18194:
--

Fixed in master: afebdc2f19a7e5439d9550bd8ffba609fba9a7dc
release-1.11: 1ebeb7e170e37d86acfed46744853d35a694e206

> Update Table API Walkthrough
> 
>
> Key: FLINK-18194
> URL: https://issues.apache.org/jira/browse/FLINK-18194
> Project: Flink
>  Issue Type: Improvement
>  Components: Documentation
>Reporter: Seth Wiesman
>Assignee: Seth Wiesman
>Priority: Major
>  Labels: pull-request-available
>
> The Table API walkthrough is extremely outdated and at this point only works 
> with internal and deprecated methods! It needs to be updated to use modern 
> table api constructs. 





[GitHub] [flink-playgrounds] sjwiesman commented on pull request #14: [FLINK-18194][walkthrough] Update Table API Walkthrough

2020-06-23 Thread GitBox


sjwiesman commented on pull request #14:
URL: https://github.com/apache/flink-playgrounds/pull/14#issuecomment-648374780


   cc @pnowojski 







[GitHub] [flink-playgrounds] sjwiesman closed pull request #13: [FLINK-18194][walkthrough] Update Table API Walkthrough

2020-06-23 Thread GitBox


sjwiesman closed pull request #13:
URL: https://github.com/apache/flink-playgrounds/pull/13


   







[GitHub] [flink] sjwiesman closed pull request #12592: [FLINK-18194][walkthrough] Update Table API Walkthrough

2020-06-23 Thread GitBox


sjwiesman closed pull request #12592:
URL: https://github.com/apache/flink/pull/12592


   







[GitHub] [flink] sjwiesman commented on a change in pull request #12592: [FLINK-18194][walkthrough] Update Table API Walkthrough

2020-06-23 Thread GitBox


sjwiesman commented on a change in pull request #12592:
URL: https://github.com/apache/flink/pull/12592#discussion_r52641



##
File path: docs/try-flink/table_api.md
##
@@ -183,20 +183,22 @@ TableEnvironment tEnv = TableEnvironment.create(settings);
 {% endhighlight %}
 
 One of Flink's unique properties is that it provides consistent semantics 
across batch and streaming.
-This means you can develop and test applications in batch mode on static 
datasets, and deploy to production as streaming applications!
+This means you can develop and test applications in batch mode on static 
datasets, and deploy to production as streaming applications.
 
 ## Attempt One
 
 Now with the skeleton of a Job set-up, you are ready to add some business 
logic.
 The goal is to build a report that shows the total spend for each account 
across each hour of the day.
 This means the timestamp column needs be be rounded down from millisecond to 
hour granularity. 
 
-Just like a SQL query, Flink can select the required fields and group by your 
keys.
+Flink supports developing relational applications in pure [SQL]({% link 
dev/table/sql/index.md %}) or using the [Table API]({% link 
dev/table/tableApi.md %}).
+The Table API is a fluent DSL inspired, that can be written in Python, Java, 
or Scala and supports strong IDE integration.

Review comment:
   Whoops, yes nice catch









[GitHub] [flink] flinkbot edited a comment on pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12758:
URL: https://github.com/apache/flink/pull/12758#issuecomment-648346882


   
   ## CI report:
   
   * e14e00b7039fbac4800a8ef83d04e02bd621824c Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3979)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by L

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648347014


   
   ## CI report:
   
   * 5d38f42fbfb91f633c20a788ab8b15f79c82f9d7 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3980)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12690: [FLINK-18186][doc] Various updates on standalone kubernetes document

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12690:
URL: https://github.com/apache/flink/pull/12690#issuecomment-645184574


   
   ## CI report:
   
   * 062f02d76814894b5c4723dad5618d837381d474 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3968)
 
   * 43be9f803bf3444c9c2218dd98efd24a277668ce Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3973)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by LeoGq。

2020-06-23 Thread GitBox


flinkbot commented on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648347014


   
   ## CI report:
   
   * 5d38f42fbfb91f633c20a788ab8b15f79c82f9d7 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


flinkbot commented on pull request #12758:
URL: https://github.com/apache/flink/pull/12758#issuecomment-648346882


   
   ## CI report:
   
   * e14e00b7039fbac4800a8ef83d04e02bd621824c UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18397) Translate "Table & SQL Connectors Overview" page into Chinese

2020-06-23 Thread Leo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17143209#comment-17143209
 ] 

Leo commented on FLINK-18397:
-

Hi Jark Wu,
I created a pull request for this translation after verifying the changes by 
running the build script in preview mode.
Please check. If there is any problem, please let me know and I will address 
it promptly.
Thank you.

> Translate "Table & SQL Connectors Overview" page into Chinese
> -
>
> Key: FLINK-18397
> URL: https://issues.apache.org/jira/browse/FLINK-18397
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Leo
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/
> The markdown file is located in flink/docs/dev/table/connectors/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by LeoGq。

2020-06-23 Thread GitBox


flinkbot commented on pull request #12759:
URL: https://github.com/apache/flink/pull/12759#issuecomment-648341012


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 5d38f42fbfb91f633c20a788ab8b15f79c82f9d7 (Tue Jun 23 
18:33:38 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18397) Translate "Table & SQL Connectors Overview" page into Chinese

2020-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18397:
---
Labels: pull-request-available  (was: )

> Translate "Table & SQL Connectors Overview" page into Chinese
> -
>
> Key: FLINK-18397
> URL: https://issues.apache.org/jira/browse/FLINK-18397
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: Leo
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/
> The markdown file is located in flink/docs/dev/table/connectors/index.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] Leo1993Java opened a new pull request #12759: [FLINK-18397],Hi Jark Wu。I have translated 'Translate 'Table & SQL Connectors Overview' page into Chinese' document,please check。by LeoGq

2020-06-23 Thread GitBox


Leo1993Java opened a new pull request #12759:
URL: https://github.com/apache/flink/pull/12759


   …nnectors Overview' page into Chinese' document,please check。by LeoGq。
   
   
   
   ## What is the purpose of the change
   
   *(For example: This pull request makes task deployment go through the blob 
server, rather than through RPC. That way we avoid re-transferring them on each 
deployment (during recovery).)*
   
   
   ## Brief change log
   
   *(for example:)*
 - *The TaskInfo is stored in the blob store on job creation time as a 
persistent artifact*
 - *Deployments RPC transmits only the blob storage reference*
 - *TaskManagers retrieve the TaskInfo from the blob cache*
   
   
   ## Verifying this change
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe 
tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
 - *Added integration tests for end-to-end deployment with large payloads 
(100MB)*
 - *Extended integration test for recovery after master (JobManager) 
failure*
 - *Added test that validates that TaskInfo is transferred only once across 
recoveries*
 - *Manually verified the change by running a 4 node cluser with 2 
JobManagers and 4 TaskManagers, a stateful streaming program, and killing one 
JobManager and two TaskManagers during the execution, verifying that recovery 
happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (yes / no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (yes / no)
 - The serializers: (yes / no / don't know)
 - The runtime per-record code paths (performance sensitive): (yes / no / 
don't know)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't 
know)
 - The S3 file system connector: (yes / no / don't know)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (yes / no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


flinkbot commented on pull request #12758:
URL: https://github.com/apache/flink/pull/12758#issuecomment-648336926


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit e14e00b7039fbac4800a8ef83d04e02bd621824c (Tue Jun 23 
18:25:14 UTC 2020)
   
✅no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18386) Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18386:
---
Labels: pull-request-available  (was: )

> Translate "Print SQL Connector" page into Chinese
> -
>
> Key: FLINK-18386
> URL: https://issues.apache.org/jira/browse/FLINK-18386
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / Ecosystem
>Reporter: Jark Wu
>Assignee: houmaozheng
>Priority: Major
>  Labels: pull-request-available
>
> The page url is 
> https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/print.html
> The markdown file is located in flink/docs/dev/table/connectors/print.zh.md



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] houmaozheng opened a new pull request #12758: [FLINK-18386][docs-zh] Translate "Print SQL Connector" page into Chinese

2020-06-23 Thread GitBox


houmaozheng opened a new pull request #12758:
URL: https://github.com/apache/flink/pull/12758


   
   
   ## What is the purpose of the change
   
   Translate "Print SQL Connector" page into Chinese.
   
   The page url is 
https://ci.apache.org/projects/flink/flink-docs-master/zh/dev/table/connectors/print.html
   
   The markdown file is located in flink/docs/dev/table/connectors/print.zh.md
   
   
   ## Brief change log
   
   -translate 'flink/docs/dev/table/connectors/print.zh.md'.
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`:no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector:no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12714: [FLINK-15467][task] Wait for invokable cancellation when stopping Task

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12714:
URL: https://github.com/apache/flink/pull/12714#issuecomment-646493259


   
   ## CI report:
   
   * 0d85f9ed9df65e13533442c95cc904ee1093aa16 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3965)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12757: [FLINK-18351][table/client] fix ModuleManager creates a lot of duplic…

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12757:
URL: https://github.com/apache/flink/pull/12757#issuecomment-648155589


   
   ## CI report:
   
   * 028c02375ad8f494a42f975a503b5cfa0d7e79bd Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3964)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12756: [FLINK-18296][json] add support for TIMESTAMP_WITH_LOCAL_ZONE and fix…

2020-06-23 Thread GitBox


flinkbot edited a comment on pull request #12756:
URL: https://github.com/apache/flink/pull/12756#issuecomment-648131818


   
   ## CI report:
   
   * 577fbda18ee035db9e05432738a1450c52fdf3e2 Azure: 
[CANCELED](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3962)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



