[jira] [Closed] (FLINK-12428) Translate the "Event Time" page into Chinese

2020-06-28 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-12428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-12428.
--
Fix Version/s: (was: 1.10.2)
   (was: 1.11.0)
   1.12.0
   Resolution: Fixed

Merged to master in 1805f38a810b78cd8d71e57e068eb342984ddb13

> Translate the "Event Time" page into Chinese
> 
>
> Key: FLINK-12428
> URL: https://issues.apache.org/jira/browse/FLINK-12428
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.9.0
>Reporter: YangFei
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> File location: flink/docs/dev/event_time.zh.md
> [https://ci.apache.org/projects/flink/flink-docs-master/dev/event_time.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-16976) Update chinese documentation for ListCheckpointed deprecation

2020-06-28 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-16976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-16976.
--
Fix Version/s: (was: 1.11.0)
   1.12.0
   Resolution: Fixed

Merged to master in 41daafd074f6df6529ef4e830dde67295c29a8fa

> Update chinese documentation for ListCheckpointed deprecation
> -
>
> Key: FLINK-16976
> URL: https://issues.apache.org/jira/browse/FLINK-16976
> Project: Flink
>  Issue Type: Bug
>  Components: chinese-translation, Documentation
>Reporter: Aljoscha Krettek
>Assignee: zhangzhanhua
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> The change for the english documentation is in 
> https://github.com/apache/flink/commit/10aadfc6906a1629f7e60eacf087e351ba40d517
> The original Jira issue is FLINK-6258.





[GitHub] [flink] rmetzger closed pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-28 Thread GitBox


rmetzger closed pull request #12621:
URL: https://github.com/apache/flink/pull/12621


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rmetzger closed pull request #12442: [FLINK-12428][docs-zh] Translate the "Event Time" page into Chinese

2020-06-28 Thread GitBox


rmetzger closed pull request #12442:
URL: https://github.com/apache/flink/pull/12442


   







[jira] [Comment Edited] (FLINK-18427) Job failed under java 11

2020-06-28 Thread Zhang Hao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147497#comment-17147497
 ] 

Zhang Hao edited comment on FLINK-18427 at 6/29/20, 5:51 AM:
-

[~sewen], following your suggestion I adjusted the off-heap setting and the job 
now runs normally. Thank you.

In addition, I have two questions:
 # What is the difference between 'taskmanager.memory.task.off-heap.size' and 
'taskmanager.memory.framework.off-heap.size'? Either the 'task' or the 
'framework' setting works for my job; which one should I use?
 # According to the web UI (Task Manager -> Metrics), there are some memory 
metrics. Is "Outside JVM (Type=Direct)" the task's off-heap memory? If so, how 
should I understand the Capacity field?

!image-2020-06-29-13-49-17-756.png!


was (Author: simahao):
[~sewen], following your suggestion I adjusted the off-heap setting and the job 
now runs normally. Thank you.

In addition, I have two questions:
 # What is the difference between 'taskmanager.memory.task.off-heap.size' and 
'taskmanager.memory.framework.off-heap.size'? Either the 'task' or the 
'framework' setting works for my job; which one should I use? The Flink docs say 
'framework' is an advanced option, so I would also like to know how Flink uses 
these two options.
 # Whichever of 'framework' or 'task' I use, how big should I set it? How can I 
monitor the available off-heap size via the web UI or any other tools?
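For reference, the two options in question can be sketched in a flink-conf.yaml fragment. The keys are Flink's actual configuration options; the values below are illustrative assumptions, not recommendations:

```yaml
# Illustrative flink-conf.yaml fragment (sizes are assumptions; tune per job).
# Off-heap memory reserved for user/task code (direct and native allocations):
taskmanager.memory.task.off-heap.size: 128m
# Off-heap memory reserved for Flink's own framework needs (advanced option):
taskmanager.memory.framework.off-heap.size: 128m
```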

> Job failed under java 11
> 
>
> Key: FLINK-18427
> URL: https://issues.apache.org/jira/browse/FLINK-18427
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration, Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhang Hao
>Priority: Critical
> Attachments: image-2020-06-29-13-49-17-756.png
>
>
> flink version:1.10.0
> deployment mode:cluster
> os:linux redhat7.5
> Job parallelism:greater than 1
> My job runs normally under Java 8 but fails under Java 11. The exception, 
> shown below, indicates that Netty failed to send a message. In addition, the 
> job only failed when tasks were distributed across multiple nodes; with the 
> job's parallelism set to 1, it runs normally under Java 11 as well.
>  
> 2020-06-24 09:52:162020-06-24 
> 09:52:16org.apache.flink.runtime.io.network.netty.exception.LocalTransportException:
>  Sending the partition request to '/170.0.50.19:33320' failed. at 
> org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:124)
>  at 
> org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:115)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.PromiseNotificationUtil.tryFailure(PromiseNotificationUtil.java:64)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.notifyOutboundHandlerException(AbstractChannelHandlerContext.java:818)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:718)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
>  at 
> 

[GitHub] [flink] rmetzger commented on pull request #12621: [FLINK-16976][docs-zh] Update chinese documentation for ListCheckpoin…

2020-06-28 Thread GitBox


rmetzger commented on pull request #12621:
URL: https://github.com/apache/flink/pull/12621#issuecomment-650923727


   Thanks for the review. Merging.







[GitHub] [flink] rmetzger commented on pull request #12442: [FLINK-12428][docs-zh] Translate the "Event Time" page into Chinese

2020-06-28 Thread GitBox


rmetzger commented on pull request #12442:
URL: https://github.com/apache/flink/pull/12442#issuecomment-650923694


   Thanks for the review. Merging.







[GitHub] [flink] flinkbot commented on pull request #12786: [FLINK-18187] Backport PubSub e2e test instabilities fix to 1.11

2020-06-28 Thread GitBox


flinkbot commented on pull request #12786:
URL: https://github.com/apache/flink/pull/12786#issuecomment-650922165


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 3e6fef5f0a4a3ad6793e63094f697fb53758b9c6 (Mon Jun 29 
05:44:58 UTC 2020)
   
   **Warnings:**
* **1 pom.xml files were touched**: Check for build and licensing issues.
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18187) CheckPubSubEmulatorTest failed on azure

2020-06-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18187:
---
Labels: pull-request-available  (was: )

> CheckPubSubEmulatorTest failed on azure
> ---
>
> Key: FLINK-18187
> URL: https://issues.apache.org/jira/browse/FLINK-18187
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
>  
> {code:java}
> 2020-06-08T12:45:15.9874996Z 82609 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Waiting a while to receive the message...
> *2020-06-08T12:45:16.1955546Z 82816 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Timeout during shutdown
> *2020-06-08T12:45:16.1956405Z java.util.concurrent.TimeoutException: Timed 
> out waiting for InnerService [STOPPING] to reach a terminal state. Current 
> state: STOPPING
> ...
> 2020-06-08T12:46:08.5914230Z 135213 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager
>  [] -
>  2020-06-08T12:46:08.6054783Z [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 54.754 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest
>  2020-06-08T12:46:08.6062906Z [ERROR] 
> testPull(org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest)
>  Time elapsed: 52.123 s <<< FAILURE!
>  2020-06-08T12:46:08.6063659Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> {code}
>  





[GitHub] [flink] rmetzger opened a new pull request #12786: [FLINK-18187] Backport PubSub e2e test instabilities fix to 1.11

2020-06-28 Thread GitBox


rmetzger opened a new pull request #12786:
URL: https://github.com/apache/flink/pull/12786


   ## What is the purpose of the change
   
   The instabilities in the PubSub e2e have been fixed successfully as part of 
FLINK-16572 (but only for the master branch). This PR backports the respective 
commits to `release-1.11`.
   
   







[GitHub] [flink] rmetzger closed pull request #12785: Backport PubSub e2e test instabilities to 1.11

2020-06-28 Thread GitBox


rmetzger closed pull request #12785:
URL: https://github.com/apache/flink/pull/12785


   







[GitHub] [flink] rmetzger opened a new pull request #12785: Backport PubSub e2e test instabilities to 1.11

2020-06-28 Thread GitBox


rmetzger opened a new pull request #12785:
URL: https://github.com/apache/flink/pull/12785


   
   ## What is the purpose of the change
   
   The instabilities in the PubSub e2e have been fixed successfully as part of 
FLINK-16572 (but only for the master branch). This PR backports the respective 
commits to `release-1.11`.
   
   







[jira] [Commented] (FLINK-18187) CheckPubSubEmulatorTest failed on azure

2020-06-28 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147559#comment-17147559
 ] 

Robert Metzger commented on FLINK-18187:


[~pnowojski]: Yes, I'll backport to the 1.11 branch.

> CheckPubSubEmulatorTest failed on azure
> ---
>
> Key: FLINK-18187
> URL: https://issues.apache.org/jira/browse/FLINK-18187
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
>  
> {code:java}
> 2020-06-08T12:45:15.9874996Z 82609 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Waiting a while to receive the message...
> *2020-06-08T12:45:16.1955546Z 82816 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Timeout during shutdown
> *2020-06-08T12:45:16.1956405Z java.util.concurrent.TimeoutException: Timed 
> out waiting for InnerService [STOPPING] to reach a terminal state. Current 
> state: STOPPING
> ...
> 2020-06-08T12:46:08.5914230Z 135213 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager
>  [] -
>  2020-06-08T12:46:08.6054783Z [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 54.754 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest
>  2020-06-08T12:46:08.6062906Z [ERROR] 
> testPull(org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest)
>  Time elapsed: 52.123 s <<< FAILURE!
>  2020-06-08T12:46:08.6063659Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> {code}
>  





[jira] [Assigned] (FLINK-18187) CheckPubSubEmulatorTest failed on azure

2020-06-28 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18187?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger reassigned FLINK-18187:
--

Assignee: Robert Metzger

> CheckPubSubEmulatorTest failed on azure
> ---
>
> Key: FLINK-18187
> URL: https://issues.apache.org/jira/browse/FLINK-18187
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Google Cloud PubSub, Tests
>Affects Versions: 1.11.0
>Reporter: Roman Khachatryan
>Assignee: Robert Metzger
>Priority: Critical
> Fix For: 1.12.0
>
>
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
> [https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2930=logs=c88eea3b-64a0-564d-0031-9fdcd7b8abee=1e2bbe5b-4657-50be-1f07-d84bfce5b1f5]
>  
>  
> {code:java}
> 2020-06-08T12:45:15.9874996Z 82609 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Waiting a while to receive the message...
> *2020-06-08T12:45:16.1955546Z 82816 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest [] - 
> Timeout during shutdown
> *2020-06-08T12:45:16.1956405Z java.util.concurrent.TimeoutException: Timed 
> out waiting for InnerService [STOPPING] to reach a terminal state. Current 
> state: STOPPING
> ...
> 2020-06-08T12:46:08.5914230Z 135213 [main] INFO 
> org.apache.flink.streaming.connectors.gcp.pubsub.emulator.GCloudEmulatorManager
>  [] -
>  2020-06-08T12:46:08.6054783Z [ERROR] Tests run: 2, Failures: 1, Errors: 0, 
> Skipped: 0, Time elapsed: 54.754 s <<< FAILURE! - in 
> org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest
>  2020-06-08T12:46:08.6062906Z [ERROR] 
> testPull(org.apache.flink.streaming.connectors.gcp.pubsub.CheckPubSubEmulatorTest)
>  Time elapsed: 52.123 s <<< FAILURE!
>  2020-06-08T12:46:08.6063659Z java.lang.AssertionError: expected:<1> but 
> was:<0>
> {code}
>  





[GitHub] [flink] flinkbot edited a comment on pull request #12774: [FLINK-18395][FLINK-18388][docs-zh] Translate "ORC Format" and "CSV Format" page into Chinese

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12774:
URL: https://github.com/apache/flink/pull/12774#issuecomment-650206487


   
   ## CI report:
   
   * 7ade04cbee6e83c3bbd675a232f71f2719226903 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4051)
 
   * 671da5238ac4dd548a0e1602cd2205a3bce417c3 UNKNOWN
   * f91c53fad2b2e26aa6ac06302917f8ecd58a6bf4 UNKNOWN
   * f593984f99677c6e731d3d981cf3df6ebf7f07bd UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-13553) KvStateServerHandlerTest.readInboundBlocking unstable on Travis

2020-06-28 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-13553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147558#comment-17147558
 ] 

Robert Metzger commented on FLINK-13553:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4063=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20

> KvStateServerHandlerTest.readInboundBlocking unstable on Travis
> ---
>
> Key: FLINK-13553
> URL: https://issues.apache.org/jira/browse/FLINK-13553
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Queryable State
>Affects Versions: 1.10.0, 1.11.0
>Reporter: Till Rohrmann
>Assignee: Chesnay Schepler
>Priority: Critical
>  Labels: pull-request-available, test-stability
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{KvStateServerHandlerTest.readInboundBlocking}} and 
> {{KvStateServerHandlerTest.testQueryExecutorShutDown}} fail on Travis with a 
> {{TimeoutException}}.
> https://api.travis-ci.org/v3/job/566420641/log.txt





[jira] [Commented] (FLINK-18374) HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart produced no output for 900 seconds

2020-06-28 Thread Robert Metzger (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147557#comment-17147557
 ] 

Robert Metzger commented on FLINK-18374:


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4093=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=34f486e1-e1e4-5dd2-9c06-bfdd9b9c74a8

> HadoopS3RecoverableWriterITCase.testRecoverAfterMultiplePersistsStateWithMultiPart
>  produced no output for 900 seconds
> -
>
> Key: FLINK-18374
> URL: https://issues.apache.org/jira/browse/FLINK-18374
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / FileSystem
>Affects Versions: 1.12.0
>Reporter: Piotr Nowojski
>Priority: Major
>  Labels: test-stability
> Fix For: 1.12.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=3792=logs=d44f43ce-542c-597d-bf94-b0718c71e5e8=03dca39c-73e8-5aaf-601d-328ae5c35f20
> {noformat}
> Process produced no output for 900 seconds.
> {noformat}
> There is attached thread dump in the azure link.





[GitHub] [flink] kakaroto928 commented on pull request #12774: [FLINK-18395][FLINK-18388][docs-zh] Translate "ORC Format" and "CSV Format" page into Chinese

2020-06-28 Thread GitBox


kakaroto928 commented on pull request #12774:
URL: https://github.com/apache/flink/pull/12774#issuecomment-650911363


   Hi, Jark Wu,
   I have just finished the modifications in my branch. Should I open a new pull 
request now, or can you already merge those changes?
   Thanks a lot,
   Weibo PENG







[jira] [Commented] (FLINK-18044) Add the subtask index information to the SourceReaderContext.

2020-06-28 Thread Jiangjie Qin (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18044?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147554#comment-17147554
 ] 

Jiangjie Qin commented on FLINK-18044:
--

[~liufangliang] This ticket actually modifies the public API, so we need to 
follow the FLIP process. Given that this is a subtask of FLIP-27. I'll send 
email to the voting thread to see if anyone objects to the change.

> Add the subtask index information to the SourceReaderContext.
> -
>
> Key: FLINK-18044
> URL: https://issues.apache.org/jira/browse/FLINK-18044
> Project: Flink
>  Issue Type: Sub-task
>  Components: Connectors / Common
>Reporter: Jiangjie Qin
>Priority: Major
>  Labels: pull-request-available
>
> It is useful for the `SourceReader` to retrieve its subtask id. For example, 
> Kafka readers can create a consumer with proper client id.
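As a rough illustration of the use case described above, here is a hedged sketch of how a reader might derive a per-subtask Kafka client id. The `ReaderContext` interface and its method name below are stand-ins for the proposed API addition, not Flink's actual classes:

```java
// Hypothetical sketch: a reader builds a client id that is unique per
// parallel subtask, using a context that exposes the subtask index.
// All names here are assumptions standing in for the proposed API.
public class SubtaskClientId {

    // Minimal stand-in for the proposed SourceReaderContext addition.
    interface ReaderContext {
        int getIndexOfSubtask();
    }

    // Build a client id that distinguishes parallel reader instances.
    static String clientId(String prefix, ReaderContext ctx) {
        return prefix + "-" + ctx.getIndexOfSubtask();
    }

    public static void main(String[] args) {
        ReaderContext ctx = () -> 3; // pretend this reader is subtask 3
        System.out.println(clientId("my-kafka-reader", ctx));
    }
}
```

With a distinct client id per subtask, broker-side logs and metrics can be attributed to the exact parallel reader instance.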





[GitHub] [flink] flinkbot edited a comment on pull request #12774: [FLINK-18395][FLINK-18388][docs-zh] Translate "ORC Format" and "CSV Format" page into Chinese

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12774:
URL: https://github.com/apache/flink/pull/12774#issuecomment-650206487


   
   ## CI report:
   
   * 7ade04cbee6e83c3bbd675a232f71f2719226903 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4051)
 
   * 671da5238ac4dd548a0e1602cd2205a3bce417c3 UNKNOWN
   * f91c53fad2b2e26aa6ac06302917f8ecd58a6bf4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] kakaroto928 commented on pull request #12774: [FLINK-18395][FLINK-18388][docs-zh] Translate "ORC Format" and "CSV Format" page into Chinese

2020-06-28 Thread GitBox


kakaroto928 commented on pull request #12774:
URL: https://github.com/apache/flink/pull/12774#issuecomment-650901416


   Thanks a lot, Jark Wu. I will modify these contents as soon as possible!







[GitHub] [flink] flinkbot edited a comment on pull request #12774: [FLINK-18395][FLINK-18388][docs-zh] Translate "ORC Format" and "CSV Format" page into Chinese

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12774:
URL: https://github.com/apache/flink/pull/12774#issuecomment-650206487


   
   ## CI report:
   
   * 7ade04cbee6e83c3bbd675a232f71f2719226903 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4051)
 
   * 671da5238ac4dd548a0e1602cd2205a3bce417c3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] becketqin commented on pull request #12566: [FLINK-17761][connector/common] Add a constructor taking capacity as a parameter for `FutureCompletingBlockingQueue`

2020-06-28 Thread GitBox


becketqin commented on pull request #12566:
URL: https://github.com/apache/flink/pull/12566#issuecomment-650900201


   @pyscala Thanks for the patch. LGTM. Can we add a unit test for this?
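The pattern under review, adding a capacity-taking constructor alongside a default one, can be sketched as follows. The class, default value, and delegation below are assumptions for illustration, not the actual `FutureCompletingBlockingQueue` code:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch of a bounded queue offering both a default and an explicit
// capacity constructor. The default capacity value is an assumption.
public class CapacityQueue<T> {
    private static final int DEFAULT_CAPACITY = 10_000;
    private final BlockingQueue<T> elements;

    public CapacityQueue() {
        this(DEFAULT_CAPACITY); // delegate to the capacity constructor
    }

    public CapacityQueue(int capacity) {
        if (capacity <= 0) {
            throw new IllegalArgumentException("capacity must be positive: " + capacity);
        }
        this.elements = new ArrayBlockingQueue<>(capacity);
    }

    // Returns false when the queue is at capacity.
    public boolean offer(T element) {
        return elements.offer(element);
    }

    public int remainingCapacity() {
        return elements.remainingCapacity();
    }
}
```

A unit test of the kind requested in the review would then assert that a queue built with capacity N rejects the (N+1)-th element.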







[GitHub] [flink] flinkbot edited a comment on pull request #12777: [FLINK-18262][python][e2e] Fix the unstable e2e tests of pyflink.

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12777:
URL: https://github.com/apache/flink/pull/12777#issuecomment-650680478


   
   ## CI report:
   
   * ef52a75fdbcddde7ac55eea60d1b6db3bbcd6056 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4066)
 
   * 0fdc62c1721c484c82a7c1732ba3cc56953823df Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4097)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] KarmaGYZ commented on pull request #12715: [FLINK-18361] Support username and password options for new Elasticse…

2020-06-28 Thread GitBox


KarmaGYZ commented on pull request #12715:
URL: https://github.com/apache/flink/pull/12715#issuecomment-650886243


   I have manually tested it with ES 6 and 7.







[GitHub] [flink] flinkbot edited a comment on pull request #12777: [FLINK-18262][python][e2e] Fix the unstable e2e tests of pyflink.

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12777:
URL: https://github.com/apache/flink/pull/12777#issuecomment-650680478


   
   ## CI report:
   
   * ef52a75fdbcddde7ac55eea60d1b6db3bbcd6056 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4066)
 
   * 0fdc62c1721c484c82a7c1732ba3cc56953823df UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Updated] (FLINK-18444) KafkaITCase.testMultipleSourcesOnePartition failed with "Failed to send data to Kafka: Failed to send data to Kafka: This server does not host this topic-partition"

2020-06-28 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-18444:

Labels: test-stability  (was: )

> KafkaITCase.testMultipleSourcesOnePartition failed with "Failed to send data 
> to Kafka: Failed to send data to Kafka: This server does not host this 
> topic-partition"
> 
>
> Key: FLINK-18444
> URL: https://issues.apache.org/jira/browse/FLINK-18444
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Kafka, Tests
>Affects Versions: 1.12.0
>Reporter: Dian Fu
>Priority: Major
>  Labels: test-stability
>
> Instance on master: 
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4092=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f
> {code}
> 2020-06-28T21:37:54.8113215Z [ERROR] 
> testMultipleSourcesOnePartition(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
>   Time elapsed: 5.079 s  <<< ERROR!
> 2020-06-28T21:37:54.8113885Z 
> org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
> 2020-06-28T21:37:54.8114418Z  at 
> org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
> 2020-06-28T21:37:54.8114905Z  at 
> org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
> 2020-06-28T21:37:54.8115397Z  at 
> org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
> 2020-06-28T21:37:54.8116254Z  at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1699)
> 2020-06-28T21:37:54.8116857Z  at 
> org.apache.flink.streaming.connectors.kafka.testutils.DataGenerators.generateRandomizedIntegerSequence(DataGenerators.java:120)
> 2020-06-28T21:37:54.8117715Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:933)
> 2020-06-28T21:37:54.8118327Z  at 
> org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMultipleSourcesOnePartition(KafkaITCase.java:107)
> 2020-06-28T21:37:54.8118805Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> 2020-06-28T21:37:54.8119859Z  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> 2020-06-28T21:37:54.8120861Z  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> 2020-06-28T21:37:54.8121436Z  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
> 2020-06-28T21:37:54.8121899Z  at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
> 2020-06-28T21:37:54.8122424Z  at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
> 2020-06-28T21:37:54.8122942Z  at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
> 2020-06-28T21:37:54.8123406Z  at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
> 2020-06-28T21:37:54.8123899Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
> 2020-06-28T21:37:54.8124507Z  at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
> 2020-06-28T21:37:54.8124978Z  at 
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> 2020-06-28T21:37:54.8125332Z  at 
> java.base/java.lang.Thread.run(Thread.java:834)
> 2020-06-28T21:37:54.8125743Z Caused by: 
> org.apache.flink.runtime.JobException: Recovery is suppressed by 
> NoRestartBackoffTimeStrategy
> 2020-06-28T21:37:54.8126305Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
> 2020-06-28T21:37:54.8126961Z  at 
> org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
> 2020-06-28T21:37:54.8127766Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
> 2020-06-28T21:37:54.8128570Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
> 2020-06-28T21:37:54.8129140Z  at 
> org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
> 2020-06-28T21:37:54.8129686Z  at 
> org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:506)
> 2020-06-28T21:37:54.8130174Z  at 
> org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
> 

[jira] [Created] (FLINK-18444) KafkaITCase.testMultipleSourcesOnePartition failed with "Failed to send data to Kafka: Failed to send data to Kafka: This server does not host this topic-partition"

2020-06-28 Thread Dian Fu (Jira)
Dian Fu created FLINK-18444:
---

 Summary: KafkaITCase.testMultipleSourcesOnePartition failed with 
"Failed to send data to Kafka: Failed to send data to Kafka: This server does 
not host this topic-partition"
 Key: FLINK-18444
 URL: https://issues.apache.org/jira/browse/FLINK-18444
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Affects Versions: 1.12.0
Reporter: Dian Fu


Instance on master: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4092=logs=ce8f3cc3-c1ea-5281-f5eb-df9ebd24947f

{code}
2020-06-28T21:37:54.8113215Z [ERROR] 
testMultipleSourcesOnePartition(org.apache.flink.streaming.connectors.kafka.KafkaITCase)
  Time elapsed: 5.079 s  <<< ERROR!
2020-06-28T21:37:54.8113885Z 
org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
2020-06-28T21:37:54.8114418Zat 
org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
2020-06-28T21:37:54.8114905Zat 
org.apache.flink.runtime.minicluster.MiniCluster.executeJobBlocking(MiniCluster.java:677)
2020-06-28T21:37:54.8115397Zat 
org.apache.flink.streaming.util.TestStreamEnvironment.execute(TestStreamEnvironment.java:81)
2020-06-28T21:37:54.8116254Zat 
org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1699)
2020-06-28T21:37:54.8116857Zat 
org.apache.flink.streaming.connectors.kafka.testutils.DataGenerators.generateRandomizedIntegerSequence(DataGenerators.java:120)
2020-06-28T21:37:54.8117715Zat 
org.apache.flink.streaming.connectors.kafka.KafkaConsumerTestBase.runMultipleSourcesOnePartitionExactlyOnceTest(KafkaConsumerTestBase.java:933)
2020-06-28T21:37:54.8118327Zat 
org.apache.flink.streaming.connectors.kafka.KafkaITCase.testMultipleSourcesOnePartition(KafkaITCase.java:107)
2020-06-28T21:37:54.8118805Zat 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-06-28T21:37:54.8119859Zat 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-06-28T21:37:54.8120861Zat 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-06-28T21:37:54.8121436Zat 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
2020-06-28T21:37:54.8121899Zat 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
2020-06-28T21:37:54.8122424Zat 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2020-06-28T21:37:54.8122942Zat 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
2020-06-28T21:37:54.8123406Zat 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2020-06-28T21:37:54.8123899Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
2020-06-28T21:37:54.8124507Zat 
org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
2020-06-28T21:37:54.8124978Zat 
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
2020-06-28T21:37:54.8125332Zat 
java.base/java.lang.Thread.run(Thread.java:834)
2020-06-28T21:37:54.8125743Z Caused by: org.apache.flink.runtime.JobException: 
Recovery is suppressed by NoRestartBackoffTimeStrategy
2020-06-28T21:37:54.8126305Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:116)
2020-06-28T21:37:54.8126961Zat 
org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:78)
2020-06-28T21:37:54.8127766Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
2020-06-28T21:37:54.8128570Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:185)
2020-06-28T21:37:54.8129140Zat 
org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:179)
2020-06-28T21:37:54.8129686Zat 
org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:506)
2020-06-28T21:37:54.8130174Zat 
org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:386)
2020-06-28T21:37:54.8130717Zat 
jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
2020-06-28T21:37:54.8131254Zat 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-06-28T21:37:54.8131954Zat 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
2020-06-28T21:37:54.8132626Zat 
org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:284)

[jira] [Closed] (FLINK-18439) Update sql client jar url in docs

2020-06-28 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu closed FLINK-18439.
---
Resolution: Fixed

- master (1.12.0): f809d8a9374ef3a2be2be355bd1c78d81bc89d75
- 1.11.0: 4882cc6ddbe1353a7c167e0662162da0b3fb

> Update sql client jar url in docs
> -
>
> Key: FLINK-18439
> URL: https://issues.apache.org/jira/browse/FLINK-18439
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> the sql client jar url should be:
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch6]
>  ...
> but currently it is:
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6...]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] wuchong merged pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


wuchong merged pull request #12783:
URL: https://github.com/apache/flink/pull/12783


   







[GitHub] [flink] wuchong merged pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


wuchong merged pull request #12782:
URL: https://github.com/apache/flink/pull/12782


   







[jira] [Commented] (FLINK-18433) From the end-to-end performance test results, 1.11 has a regression

2020-06-28 Thread Yu Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147499#comment-17147499
 ] 

Yu Li commented on FLINK-18433:
---

bq. The last commit corresponding to the test package is 
e13146f80114266aa34c9fe9f3dc27e87f7a7649
Thanks for the confirmation [~Aihua]. From the commit history, FLINK-17800 was 
reverted and thus not included in your test.
{noformat}
* e13146f801 - [FLINK-18242][state-backend-rocksdb] Remove the deprecated 
OptionsFactory and related classes
* 4a6825b843 - [FLINK-18290][checkpointing] Don't System.exit on 
CheckpointCoordinator failure if it is shut down
* 258a01e718 - [FLINK-18299][json] Fix the non SQL standard timestamp format in 
JSON format
* 9e20929dbd - [FLINK-18072][hbase] Fix HBaseLookupFunction can not work with 
new internal data structure RowData
* e7c634f223 -  [FLINK-18238][checkpoint] Broadcast CancelCheckpointMarker 
while executing checkpoint aborted by coordinator RPC
* cdac5e32eb - Revert "[FLINK-17800][roksdb] Ensure total order seek to avoid 
user misuse"
* 6e367fb06a - Revert "[FLINK-17800][roksdb] Support customized RocksDB 
write/read options and use RocksDBResourceContainer to get them"
{noformat}

> From the end-to-end performance test results, 1.11 has a regression
> ---
>
> Key: FLINK-18433
> URL: https://issues.apache.org/jira/browse/FLINK-18433
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, API / DataStream
>Affects Versions: 1.11.0
> Environment: 3 machines
> [|https://github.com/Li-Aihua/flink/blob/test_suite_for_basic_operations_1.11/flink-end-to-end-perf-tests/flink-basic-operations/src/main/java/org/apache/flink/basic/operations/PerformanceTestJob.java]
>Reporter: Aihua Li
>Priority: Major
> Attachments: flink_11.log.gz
>
>
>  
> I ran end-to-end performance tests between the Release-1.10 and Release-1.11. 
> the results were as follows:
> |scenarioName|release-1.10|release-1.11| |
> |OneInput_Broadcast_LazyFromSource_ExactlyOnce_10_rocksdb|46.175|43.8133|-5.11%|
> |OneInput_Rescale_LazyFromSource_ExactlyOnce_100_heap|211.835|200.355|-5.42%|
> |OneInput_Rebalance_LazyFromSource_ExactlyOnce_1024_rocksdb|1721.041667|1618.32|-5.97%|
> |OneInput_KeyBy_LazyFromSource_ExactlyOnce_10_heap|46|43.615|-5.18%|
> |OneInput_Broadcast_Eager_ExactlyOnce_100_rocksdb|212.105|199.688|-5.85%|
> |OneInput_Rescale_Eager_ExactlyOnce_1024_heap|1754.64|1600.12|-8.81%|
> |OneInput_Rebalance_Eager_ExactlyOnce_10_rocksdb|45.9167|43.0983|-6.14%|
> |OneInput_KeyBy_Eager_ExactlyOnce_100_heap|212.0816667|200.727|-5.35%|
> |OneInput_Broadcast_LazyFromSource_AtLeastOnce_1024_rocksdb|1718.245|1614.381667|-6.04%|
> |OneInput_Rescale_LazyFromSource_AtLeastOnce_10_heap|46.12|43.5517|-5.57%|
> |OneInput_Rebalance_LazyFromSource_AtLeastOnce_100_rocksdb|212.038|200.388|-5.49%|
> |OneInput_KeyBy_LazyFromSource_AtLeastOnce_1024_heap|1762.048333|1606.408333|-8.83%|
> |OneInput_Broadcast_Eager_AtLeastOnce_10_rocksdb|46.0583|43.4967|-5.56%|
> |OneInput_Rescale_Eager_AtLeastOnce_100_heap|212.233|201.188|-5.20%|
> |OneInput_Rebalance_Eager_AtLeastOnce_1024_rocksdb|1720.66|1616.85|-6.03%|
> |OneInput_KeyBy_Eager_AtLeastOnce_10_heap|46.14|43.6233|-5.45%|
> |TwoInputs_Broadcast_LazyFromSource_ExactlyOnce_100_rocksdb|156.918|152.957|-2.52%|
> |TwoInputs_Rescale_LazyFromSource_ExactlyOnce_1024_heap|1415.511667|1300.1|-8.15%|
> |TwoInputs_Rebalance_LazyFromSource_ExactlyOnce_10_rocksdb|34.2967|34.1667|-0.38%|
> |TwoInputs_KeyBy_LazyFromSource_ExactlyOnce_100_heap|158.353|151.848|-4.11%|
> |TwoInputs_Broadcast_Eager_ExactlyOnce_1024_rocksdb|1373.406667|1300.056667|-5.34%|
> |TwoInputs_Rescale_Eager_ExactlyOnce_10_heap|34.5717|32.0967|-7.16%|
> |TwoInputs_Rebalance_Eager_ExactlyOnce_100_rocksdb|158.655|147.44|-7.07%|
> |TwoInputs_KeyBy_Eager_ExactlyOnce_1024_heap|1356.611667|1292.386667|-4.73%|
> |TwoInputs_Broadcast_LazyFromSource_AtLeastOnce_10_rocksdb|34.01|33.205|-2.37%|
> |TwoInputs_Rescale_LazyFromSource_AtLeastOnce_100_heap|149.588|145.997|-2.40%|
> |TwoInputs_Rebalance_LazyFromSource_AtLeastOnce_1024_rocksdb|1359.74|1299.156667|-4.46%|
> |TwoInputs_KeyBy_LazyFromSource_AtLeastOnce_10_heap|34.025|29.6833|-12.76%|
> |TwoInputs_Broadcast_Eager_AtLeastOnce_100_rocksdb|157.303|151.4616667|-3.71%|
> |TwoInputs_Rescale_Eager_AtLeastOnce_1024_heap|1368.74|1293.238333|-5.52%|
> |TwoInputs_Rebalance_Eager_AtLeastOnce_10_rocksdb|34.325|33.285|-3.03%|
> |TwoInputs_KeyBy_Eager_AtLeastOnce_100_heap|162.5116667|134.375|-17.31%|
> As the numbers show, 1.11 has a performance regression of roughly 5% in most 
> scenarios, with a maximum of 17%. This needs to be investigated.
> the test code:
> flink-1.10.0: 
> 

[jira] [Commented] (FLINK-18427) Job failed under java 11

2020-06-28 Thread Zhang Hao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147497#comment-17147497
 ] 

Zhang Hao commented on FLINK-18427:
---

[~sewen], following your suggestion I adjusted the off-heap setting and the job 
now runs normally. Thank you.

In addition, I have two questions:
 # What is the difference between 'taskmanager.memory.task.off-heap.size' and 
'taskmanager.memory.framework.off-heap.size'? I found that setting either 'task' 
or 'framework' works for my job; which one should I use? The Flink docs say that 
'framework' is an advanced option, and I would also like to know how Flink uses 
these two options.
 # Whichever of 'framework' or 'task' I use, how large should I set it? And how 
can I monitor the available off-heap size via the web UI or other tools?
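For reference, both options are plain entries in flink-conf.yaml. A minimal 
sketch is below; the sizes are illustrative placeholders, not recommendations, 
and the comments summarize the documented intent of each option:

{code}
# Off-heap memory reserved for user code (UDFs, native libraries, direct buffers
# allocated by the job itself).
taskmanager.memory.task.off-heap.size: 256m

# Off-heap memory reserved for Flink's own framework code; documented as an
# advanced option that normally stays at its default.
taskmanager.memory.framework.off-heap.size: 128m
{code}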

> Job failed under java 11
> 
>
> Key: FLINK-18427
> URL: https://issues.apache.org/jira/browse/FLINK-18427
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Configuration, Runtime / Network
>Affects Versions: 1.10.0
>Reporter: Zhang Hao
>Priority: Critical
>
> flink version:1.10.0
> deployment mode:cluster
> os:linux redhat7.5
> Job parallelism:greater than 1
> My job runs normally under Java 8 but fails under Java 11. Exception info is 
> below; Netty fails to send a message. In addition, I found the job only fails 
> when tasks are distributed across multiple nodes: if I set the job's 
> parallelism to 1, it also runs normally under Java 11.
>  
> 2020-06-24 09:52:162020-06-24 
> 09:52:16org.apache.flink.runtime.io.network.netty.exception.LocalTransportException:
>  Sending the partition request to '/170.0.50.19:33320' failed. at 
> org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:124)
>  at 
> org.apache.flink.runtime.io.network.netty.NettyPartitionRequestClient$1.operationComplete(NettyPartitionRequestClient.java:115)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:500)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:474)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:413)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:538)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:531)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:111)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.PromiseNotificationUtil.tryFailure(PromiseNotificationUtil.java:64)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.notifyOutboundHandlerException(AbstractChannelHandlerContext.java:818)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:718)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.invokeWrite(AbstractChannelHandlerContext.java:708)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext.access$1700(AbstractChannelHandlerContext.java:56)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.write(AbstractChannelHandlerContext.java:1102)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$WriteAndFlushTask.write(AbstractChannelHandlerContext.java:1149)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.AbstractChannelHandlerContext$AbstractWriteTask.run(AbstractChannelHandlerContext.java:1073)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:416)
>  at 
> org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:515)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:918)
>  at 
> org.apache.flink.shaded.netty4.io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
>  at java.base/java.lang.Thread.run(Thread.java:834)Caused by: 
> java.io.IOException: Error while serializing message: 
> PartitionRequest(8059a0b47f7ba0ff814ea52427c584e7@6750c1170c861176ad3ceefe9b02f36e:0:2)
>  at 
> org.apache.flink.runtime.io.network.netty.NettyMessage$NettyMessageEncoder.write(NettyMessage.java:177)
>  at 
> 

[jira] [Commented] (FLINK-18443) The operator name select: (ip, ts, count, environment.access AS access, environment.brand AS brand, sid, params.adid AS adid, eventid) exceeded the 80 characters lengt

2020-06-28 Thread mzz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147494#comment-17147494
 ] 

mzz commented on FLINK-18443:
-

I tried to read the source code, but it didn't help me, because the parameter 
METRICS_OPERATOR_NAME_MAX_LENGTH is declared final:

{code:java}
static final int METRICS_OPERATOR_NAME_MAX_LENGTH = 80;

public OperatorMetricGroup getOrAddOperator(OperatorID operatorID, String name) {
	if (name != null && name.length() > METRICS_OPERATOR_NAME_MAX_LENGTH) {
		LOG.warn("The operator name {} exceeded the {} characters length limit and was truncated.",
			name, METRICS_OPERATOR_NAME_MAX_LENGTH);
		name = name.substring(0, METRICS_OPERATOR_NAME_MAX_LENGTH);
	}
	OperatorMetricGroup operator = new OperatorMetricGroup(this.registry, this, operatorID, name);
	// unique OperatorIDs only exist in streaming, so we have to rely on the name for batch operators
	final String key = operatorID + name;

	synchronized (this) {
		OperatorMetricGroup previous = operators.put(key, operator);
		if (previous == null) {
			// no operator group so far
			return operator;
		} else {
			// already had an operator group. restore that one.
			operators.put(key, previous);
			return previous;
		}
	}
}
{code}
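As a self-contained illustration of the truncation rule discussed above (this 
is a sketch, not Flink's actual class; the class name OperatorNameTruncation 
is made up for the example):

```java
public class OperatorNameTruncation {
    // Same constant as in Flink's metric group code: operator names longer
    // than 80 characters are cut down before the metric group is created.
    static final int METRICS_OPERATOR_NAME_MAX_LENGTH = 80;

    // Mirrors the truncation branch of getOrAddOperator: names at or under
    // the limit (and null) pass through unchanged, longer names are cut.
    public static String truncate(String name) {
        if (name != null && name.length() > METRICS_OPERATOR_NAME_MAX_LENGTH) {
            return name.substring(0, METRICS_OPERATOR_NAME_MAX_LENGTH);
        }
        return name;
    }

    public static void main(String[] args) {
        String longName = "select: (ip, ts, count, environment.access AS access, "
                + "environment.brand AS brand, sid, params.adid AS adid, eventid)";
        System.out.println(truncate(longName).length()); // prints 80
        System.out.println(truncate("map"));             // prints map
    }
}
```

So a long generated SQL operator name is silently shortened to its first 80 
characters, which is why the metric name in the warning no longer matches the 
full select expression.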



> The operator name select: (ip, ts, count, environment.access AS access, 
> environment.brand AS brand, sid, params.adid AS adid, eventid) exceeded the 
> 80 characters length limit and was truncated
> 
>
> Key: FLINK-18443
> URL: https://issues.apache.org/jira/browse/FLINK-18443
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.0
>Reporter: mzz
>Priority: Major
>
> *Schema:*
> {code:java}
> .withSchema(new Schema()
>  .field("ip", Types.STRING())
>  .field("ts", Types.STRING())
>  .field("procTime", Types.SQL_TIMESTAMP).proctime()
>  .field("environment", schemaEnvironment)
>  .field("advs", ObjectArrayTypeInfo.getInfoFor(new Array[Row](0).getClass, 
>   Types.ROW(Array("count","sid", "eventid","params"), 
>   Array[TypeInformation[_]](Types.STRING(),Types.STRING(), 
>   
> Types.STRING(),Types.ROW(Array("adid","adtype","ecpm"),Array[TypeInformation[_]]
>  (Types.STRING(),Types.STRING(),Types.STRING()))
> )
> {code}
> *when execute this sql*:
> {code:java}
> val sql =
>   """
> |SELECT
> |ip,
> |ts,
> |params.ad,
> |params.adtype,
> |eventid,
> |procTime
> |FROM aggs_test
> |CROSS JOIN UNNEST(advs) AS t (`count`,sid, eventid,params)
> |""".stripMargin
> {code}
> *I got a warning, and the console keeps printing this warning repeatedly, 
> with no normal output*
> {code:java}
> 09:38:38,694 WARN  org.apache.flink.metrics.MetricGroup   
>- The operator name correlate: table(explode($cor0.advs)), select: ip, ts, 
> procTime, advs, sid, eventid, params exceeded the 80 characters length limit 
> and was truncated.
> {code}
> *But after I changed it as follows, although this warning still occasionally 
> appears, the output is produced normally. I changed the 'params' type from 
> Types.ROW to Types.STRING.*
> {code:java}
>  .field("advs", ObjectArrayTypeInfo.getInfoFor(new Array[Row](0).getClass, 
> Types.ROW(Array("count", "sid", "eventid", "params"),
>   Array[TypeInformation[_]](Types.STRING(), Types.STRING(), 
> Types.STRING(), Types.STRING()
> {code}





[jira] [Updated] (FLINK-17424) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to download error

2020-06-28 Thread Dian Fu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dian Fu updated FLINK-17424:

Affects Version/s: 1.12.0

> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> download error
> 
>
> Key: FLINK-17424
> URL: https://issues.apache.org/jira/browse/FLINK-17424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0, 1.12.0
>Reporter: Yu Li
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> `SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1)` failed in 
> release-1.10 cron job with the below error:
> {noformat}
> Preparing Elasticsearch(version=7)...
> Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
>  ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
>   4  276M4 13.3M0 0  28.8M  0  0:00:09 --:--:--  0:00:09 28.8M
>  42  276M   42  117M0 0  80.7M  0  0:00:03  0:00:01  0:00:02 80.7M
>  70  276M   70  196M0 0  79.9M  0  0:00:03  0:00:02  0:00:01 79.9M
>  89  276M   89  248M0 0  82.3M  0  0:00:03  0:00:03 --:--:-- 82.4M
> curl: (56) GnuTLS recv error (-54): Error in the pull function.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> [FAIL] Test script contains errors.
> {noformat}
> https://api.travis-ci.org/v3/job/680222168/log.txt





[jira] [Commented] (FLINK-17424) SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to download error

2020-06-28 Thread Dian Fu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147493#comment-17147493
 ] 

Dian Fu commented on FLINK-17424:
-

Another instance on the master branch: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=4092=logs=9fca669f-5c5f-59c7-4118-e31c641064f0=91bf6583-3fb2-592f-e4d4-d79d79c3230a

> SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1) failed due to 
> download error
> 
>
> Key: FLINK-17424
> URL: https://issues.apache.org/jira/browse/FLINK-17424
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / ElasticSearch, Tests
>Affects Versions: 1.11.0
>Reporter: Yu Li
>Priority: Major
>  Labels: test-stability
> Fix For: 1.11.0
>
>
> `SQL Client end-to-end test (Old planner) Elasticsearch (v7.5.1)` failed in 
> release-1.10 cron job with the below error:
> {noformat}
> Preparing Elasticsearch(version=7)...
> Downloading Elasticsearch from 
> https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz
>  ...
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 0
>   4  276M4 13.3M0 0  28.8M  0  0:00:09 --:--:--  0:00:09 28.8M
>  42  276M   42  117M0 0  80.7M  0  0:00:03  0:00:01  0:00:02 80.7M
>  70  276M   70  196M0 0  79.9M  0  0:00:03  0:00:02  0:00:01 79.9M
>  89  276M   89  248M0 0  82.3M  0  0:00:03  0:00:03 --:--:-- 82.4M
> curl: (56) GnuTLS recv error (-54): Error in the pull function.
>   % Total% Received % Xferd  Average Speed   TimeTime Time  
> Current
>  Dload  Upload   Total   SpentLeft  Speed
>   0 00 00 0  0  0 --:--:-- --:--:-- --:--:-- 
> 0curl: (7) Failed to connect to localhost port 9200: Connection refused
> [FAIL] Test script contains errors.
> {noformat}
> https://api.travis-ci.org/v3/job/680222168/log.txt





[jira] [Commented] (FLINK-18437) org.apache.calcite.sql.validate.SqlValidatorException: List of column aliases must have same degree as table

2020-06-28 Thread mzz (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147492#comment-17147492
 ] 

mzz commented on FLINK-18437:
-

*I tried to read the source code, but it didn't help me, because 
METRICS_OPERATOR_NAME_MAX_LENGTH is declared final:*

{code:java}
static final int METRICS_OPERATOR_NAME_MAX_LENGTH = 80;

public OperatorMetricGroup getOrAddOperator(OperatorID operatorID, String name) {
	if (name != null && name.length() > METRICS_OPERATOR_NAME_MAX_LENGTH) {
		LOG.warn("The operator name {} exceeded the {} characters length limit and was truncated.",
			name, METRICS_OPERATOR_NAME_MAX_LENGTH);
		name = name.substring(0, METRICS_OPERATOR_NAME_MAX_LENGTH);
	}
	OperatorMetricGroup operator = new OperatorMetricGroup(this.registry, this, operatorID, name);
	// unique OperatorIDs only exist in streaming, so we have to rely on the name for batch operators
	final String key = operatorID + name;

	synchronized (this) {
		OperatorMetricGroup previous = operators.put(key, operator);
		if (previous == null) {
			// no operator group so far
			return operator;
		} else {
			// already had an operator group. restore that one.
			operators.put(key, previous);
			return previous;
		}
	}
}
{code}


> org.apache.calcite.sql.validate.SqlValidatorException: List of column aliases 
> must have same degree as table
> 
>
> Key: FLINK-18437
> URL: https://issues.apache.org/jira/browse/FLINK-18437
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.3
>Reporter: mzz
>Priority: Critical
>
>  .withSchema(new Schema()
> .field("ip", Types.STRING())
> .field("ts", Types.STRING())
> .field("environment", Types.ROW(Array("access", "brand"), 
> Array[TypeInformation[_]](Types.STRING(), Types.STRING)))
> .field("advs", ObjectArrayTypeInfo.getInfoFor(new 
> Array[Row](0).getClass, Types.ROW(Array("count", "eventid"), 
> Array[TypeInformation[_]](Types.STRING(), Types.STRING
>   )
>   .inAppendMode()
>   .registerTableSource("aggs_test")
> The code above is the data schema. I tried the approach described in 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/queries.html|http://example.com], 
> but when I execute this SQL:
> val sql1 =
>   """
> |SELECT
> |ip,
> |ts,
> |environment,
> |adv
> |FROM aggs_test
> |CROSS JOIN UNNEST(advs) AS t (adv)
> |""".stripMargin
> It reports an error:
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 8, column 31 to line 8, column 33: List of 
> column aliases must have same degree as table; table has 2 columns ('access', 
> 'brand'), whereas alias list has 1 columns
>   at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:128)
>   at 
> org.apache.flink.table.sqlexec.SqlToOperationConverter.convert(SqlToOperationConverter.java:83)
>   at 
> org.apache.flink.table.planner.StreamPlanner.parse(StreamPlanner.scala:115)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:298)
>   at 
> QM.COM.Flink.KafakaHelper.FlinkTableConnKafka$.main(FlinkTableConnKafka.scala:88)
>   at 
> QM.COM.Flink.KafakaHelper.FlinkTableConnKafka.main(FlinkTableConnKafka.scala)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 8, 
> column 31 to line 8, column 33: List of column aliases must have same degree 
> as table; table has 2 columns ('access', 'brand'), whereas alias list has 1 
> columns
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:809)
>   at 
> 

[jira] [Created] (FLINK-18443) The operator name select: (ip, ts, count, environment.access AS access, environment.brand AS brand, sid, params.adid AS adid, eventid) exceeded the 80 characters length

2020-06-28 Thread mzz (Jira)
mzz created FLINK-18443:
---

 Summary: The operator name select: (ip, ts, count, 
environment.access AS access, environment.brand AS brand, sid, params.adid AS 
adid, eventid) exceeded the 80 characters length limit and was truncated
 Key: FLINK-18443
 URL: https://issues.apache.org/jira/browse/FLINK-18443
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.9.0
Reporter: mzz


*Schema:*

{code:java}
.withSchema(new Schema()
 .field("ip", Types.STRING())
 .field("ts", Types.STRING())
 .field("procTime", Types.SQL_TIMESTAMP).proctime()
 .field("environment", schemaEnvironment)
 .field("advs", ObjectArrayTypeInfo.getInfoFor(new Array[Row](0).getClass, 
  Types.ROW(Array("count","sid", "eventid","params"), 
  Array[TypeInformation[_]](Types.STRING(),Types.STRING(), 
  
Types.STRING(),Types.ROW(Array("adid","adtype","ecpm"),Array[TypeInformation[_]]
 (Types.STRING(),Types.STRING(),Types.STRING()))
)
{code}

*When I execute this SQL*:

{code:java}
val sql =
  """
|SELECT
|ip,
|ts,
|params.ad,
|params.adtype,
|eventid,
|procTime
|FROM aggs_test
|CROSS JOIN UNNEST(advs) AS t (`count`,sid, eventid,params)
|""".stripMargin
{code}

*I got a warning, and the console keeps repeating it with no normal output*

{code:java}
09:38:38,694 WARN  org.apache.flink.metrics.MetricGroup 
 - The operator name correlate: table(explode($cor0.advs)), select: ip, ts, 
procTime, advs, sid, eventid, params exceeded the 80 characters length limit 
and was truncated.

{code}

*But after I changed it as follows, the output works normally, although the 
warning still appears occasionally. I changed the 'params' type from Types.ROW 
to Types.STRING.*

{code:java}
 .field("advs", ObjectArrayTypeInfo.getInfoFor(new Array[Row](0).getClass, 
Types.ROW(Array("count", "sid", "eventid", "params"),
  Array[TypeInformation[_]](Types.STRING(), Types.STRING(), 
Types.STRING(), Types.STRING()
{code}

--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18437) org.apache.calcite.sql.validate.SqlValidatorException: List of column aliases must have same degree as table

2020-06-28 Thread mzz (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mzz updated FLINK-18437:

Summary: org.apache.calcite.sql.validate.SqlValidatorException: List of 
column aliases must have same degree as table  (was: Error message is not 
correct when using UNNEST)

> org.apache.calcite.sql.validate.SqlValidatorException: List of column aliases 
> must have same degree as table
> 
>
> Key: FLINK-18437
> URL: https://issues.apache.org/jira/browse/FLINK-18437
> Project: Flink
>  Issue Type: Bug
>  Components: API / DataStream
>Affects Versions: 1.9.3
>Reporter: mzz
>Priority: Critical
>
>  .withSchema(new Schema()
> .field("ip", Types.STRING())
> .field("ts", Types.STRING())
> .field("environment", Types.ROW(Array("access", "brand"), 
> Array[TypeInformation[_]](Types.STRING(), Types.STRING)))
> .field("advs", ObjectArrayTypeInfo.getInfoFor(new 
> Array[Row](0).getClass, Types.ROW(Array("count", "eventid"), 
> Array[TypeInformation[_]](Types.STRING(), Types.STRING
>   )
>   .inAppendMode()
>   .registerTableSource("aggs_test")
> The code above is the data schema. I tried the approach described at 
> [https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/queries.html|http://example.com],
> but when I execute this SQL: 
> val sql1 =
>   """
> |SELECT
> |ip,
> |ts,
> |environment,
> |adv
> |FROM aggs_test
> |CROSS JOIN UNNEST(advs) AS t (adv)
> |""".stripMargin
> It reports an error:
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 8, column 31 to line 8, column 33: List of 
> column aliases must have same degree as table; table has 2 columns ('access', 
> 'brand'), whereas alias list has 1 columns
>   at 
> org.apache.flink.table.calcite.FlinkPlannerImpl.validate(FlinkPlannerImpl.scala:128)
>   at 
> org.apache.flink.table.sqlexec.SqlToOperationConverter.convert(SqlToOperationConverter.java:83)
>   at 
> org.apache.flink.table.planner.StreamPlanner.parse(StreamPlanner.scala:115)
>   at 
> org.apache.flink.table.api.internal.TableEnvironmentImpl.sqlQuery(TableEnvironmentImpl.java:298)
>   at 
> QM.COM.Flink.KafakaHelper.FlinkTableConnKafka$.main(FlinkTableConnKafka.scala:88)
>   at 
> QM.COM.Flink.KafakaHelper.FlinkTableConnKafka.main(FlinkTableConnKafka.scala)
> Caused by: org.apache.calcite.runtime.CalciteContextException: From line 8, 
> column 31 to line 8, column 33: List of column aliases must have same degree 
> as table; table has 2 columns ('access', 'brand'), whereas alias list has 1 
> columns
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at 
> org.apache.calcite.runtime.Resources$ExInstWithCause.ex(Resources.java:463)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:824)
>   at org.apache.calcite.sql.SqlUtil.newContextException(SqlUtil.java:809)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.newValidationError(SqlValidatorImpl.java:4807)
>   at 
> org.apache.calcite.sql.validate.AliasNamespace.validateImpl(AliasNamespace.java:86)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.validate(AbstractNamespace.java:84)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.validateNamespace(SqlValidatorImpl.java:997)
>   at 
> org.apache.calcite.sql.validate.AbstractNamespace.getRowType(AbstractNamespace.java:115)
>   at 
> org.apache.calcite.sql.validate.AliasNamespace.getRowType(AliasNamespace.java:41)
>   at 
> org.apache.calcite.sql.validate.DelegatingScope.resolveInNamespace(DelegatingScope.java:101)
>   at org.apache.calcite.sql.validate.ListScope.resolve(ListScope.java:191)
>   at 
> org.apache.calcite.sql.validate.ListScope.findQualifyingTableNames(ListScope.java:156)
>   at 
> org.apache.calcite.sql.validate.DelegatingScope.fullyQualify(DelegatingScope.java:238)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$Expander.visit(SqlValidatorImpl.java:5699)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl$Expander.visit(SqlValidatorImpl.java:5684)
>   at org.apache.calcite.sql.SqlIdentifier.accept(SqlIdentifier.java:317)
>   at 
> org.apache.calcite.sql.validate.SqlValidatorImpl.expand(SqlValidatorImpl.java:5291)
>   at 
> 
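The validation failure above reflects a general rule: the column alias list after `UNNEST` must have exactly as many names as the unnested row type has columns; a single alias is only valid for arrays of scalars. A minimal sketch of a corrected query for the schema in this report (the query text is hypothetical and illustrates the alias-degree rule, it is not taken from the ticket):

```python
# Hypothetical corrected query for the report above (not from the ticket):
# the alias list "AS t (...)" must name every column of the element row
# type of advs -- here ('count', 'eventid') -- not a single alias 'adv'.
corrected_sql = """
SELECT
  ip,
  ts,
  environment,
  t.`count`,
  t.eventid
FROM aggs_test
CROSS JOIN UNNEST(advs) AS t (`count`, eventid)
""".strip()

# 'count' is a reserved keyword in Flink SQL and must be backtick-quoted,
# as the working query in FLINK-18443 also does.
print(corrected_sql.splitlines()[-1])
```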

[jira] [Commented] (FLINK-18164) null <> 'str' should be true

2020-06-28 Thread Leonard Xu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147488#comment-17147488
 ] 

Leonard Xu commented on FLINK-18164:


Hi, [~libenchao] 
I agree with your view that the current behavior is correct; I don't think it 
matters whether the configs are imported from Calcite or not. From my side, it 
would be better to describe the current implementation design in this issue.

My point is that we have answered two users' questions on the mailing list by 
referencing this issue. If we close it, we may need to update those answers; 
I'll update them and cc you on the mailing list.
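For context, the behavior in question matches standard SQL three-valued logic, which can be demonstrated outside Flink with any SQL engine. A small sketch using SQLite as a stand-in (this illustrates the semantics only, not Flink's code-gen path):

```python
import sqlite3

# SQL three-valued logic: NULL <> 'str' evaluates to UNKNOWN (surfaced as
# NULL), and a WHERE clause treats UNKNOWN the same as FALSE. The question
# in this ticket is whether Flink's generated default (false) matches
# these semantics.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# The comparison itself is neither true nor false:
result = cur.execute("SELECT NULL <> 'str'").fetchone()[0]
print(result)  # None -> UNKNOWN

# In a predicate, the NULL row is filtered out, just like FALSE:
rows = cur.execute(
    "SELECT x FROM (SELECT NULL AS x UNION ALL SELECT 'a') WHERE x <> 'str'"
).fetchall()
print(rows)  # [('a',)]
conn.close()
```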

> null <> 'str' should be true
> 
>
> Key: FLINK-18164
> URL: https://issues.apache.org/jira/browse/FLINK-18164
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Benchao Li
>Priority: Major
>
> Currently, if we compare null with other literals, the result will always be 
> false.
> It's because the code gen always gives a default value (false) for the 
> result. And I think it's a bug if `null <> 'str'` is false.
> It was reported on user-zh: 
> http://apache-flink.147419.n8.nabble.com/flink-sql-null-false-td3640.html
> CC [~jark] [~ykt836]





[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2020-06-28 Thread Yun Gao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147486#comment-17147486
 ] 

Yun Gao commented on FLINK-10672:
-

[~lipeidian], many thanks for the report. Are you also using Beam on Flink? 
Could you also attach the detailed program and the exceptions? Thanks a lot.

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debian rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, 
> Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, 
> jstack_129827.log, jstack_163822.log, jstack_66985.log
>
>
> I am running a fairly complex pipeline with 200+ tasks.
> The pipeline works fine with small data (on the order of 10 kB input) but gets 
> stuck with slightly larger data (300 kB input).
>  
> The task gets stuck while writing the output to Flink; more specifically, it 
> gets stuck while requesting a memory segment in the local buffer pool. The 
> TaskManager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is 
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
>  at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunction.java:230)
>  - locked <0xf6a60bd0> (a java.lang.Object)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:81)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:32)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer$InboundObserver.onNext(BeamFnDataGrpcMultiplexer.java:139)
>  at 
> 

[GitHub] [flink] flinkbot edited a comment on pull request #12784: [FLINK-18373][statebackend] Drop useless RocksDB performance unit tests

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12784:
URL: https://github.com/apache/flink/pull/12784#issuecomment-650796879


   
   ## CI report:
   
   * d02ea0cfe658edd49c94445a52457232029e8575 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4091)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12783:
URL: https://github.com/apache/flink/pull/12783#issuecomment-650771390


   
   ## CI report:
   
   * b21bb432e55ff32fb9d929b8239ba369339c3963 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4090)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18433) From the end-to-end performance test results, 1.11 has a regression

2020-06-28 Thread Arvid Heise (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147409#comment-17147409
 ] 

Arvid Heise commented on FLINK-18433:
-

[~trohrmann], I used the local executor with explicit Xmx configuration, so I'm 
bypassing all the TM/JM memory setup code. In the end, most values should be 
default values.
{noformat}
2020-06-26 14:53:07,199 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.cpu.cores' , default: null (fallback 
keys: []) required for local execution is not set, setting it to its default 
value 1.7976931348623157E308
2020-06-26 14:53:07,201 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.memory.task.heap.size' , default: null 
(fallback keys: []) required for local execution is not set, setting it to its 
default value 9223372036854775807 bytes
2020-06-26 14:53:07,201 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.memory.task.off-heap.size' , default: 0 
bytes (fallback keys: []) required for local execution is not set, setting it 
to its default value 9223372036854775807 bytes
2020-06-26 14:53:07,202 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.memory.network.min' , default: 64 mb 
(fallback keys: [{key=taskmanager.network.memory.min, isDeprecated=true}]) 
required for local execution is not set, setting it to its default value 64 mb
2020-06-26 14:53:07,202 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.memory.network.max' , default: 1 gb 
(fallback keys: [{key=taskmanager.network.memory.max, isDeprecated=true}]) 
required for local execution is not set, setting it to its default value 64 mb
2020-06-26 14:53:07,202 INFO  
org.apache.flink.runtime.taskexecutor.TaskExecutorResourceUtils [] - The 
configuration option Key: 'taskmanager.memory.managed.size' , default: null 
(fallback keys: [{key=taskmanager.memory.size, isDeprecated=true}]) required 
for local execution is not set, setting it to its default value 128 mb{noformat}
Does anyone know how the TPS metric is calculated? Would a slower deployment 
affect it? Or is it only the records/s for the last second? [~Aihua], could you 
publish the raw measurements? I'd like to see the spread, and maybe the 
timeline will also help us.

We can exclude the weird cancellation behavior though (should still be 
investigated) as it seems [~Aihua] did not cancel the job before taking the 
metric.

> From the end-to-end performance test results, 1.11 has a regression
> ---
>
> Key: FLINK-18433
> URL: https://issues.apache.org/jira/browse/FLINK-18433
> Project: Flink
>  Issue Type: Bug
>  Components: API / Core, API / DataStream
>Affects Versions: 1.11.0
> Environment: 3 machines
> [|https://github.com/Li-Aihua/flink/blob/test_suite_for_basic_operations_1.11/flink-end-to-end-perf-tests/flink-basic-operations/src/main/java/org/apache/flink/basic/operations/PerformanceTestJob.java]
>Reporter: Aihua Li
>Priority: Major
> Attachments: flink_11.log.gz
>
>
>  
> I ran end-to-end performance tests comparing Release-1.10 and Release-1.11. 
> The results were as follows:
> |scenarioName|release-1.10|release-1.11|delta|
> |OneInput_Broadcast_LazyFromSource_ExactlyOnce_10_rocksdb|46.175|43.8133|-5.11%|
> |OneInput_Rescale_LazyFromSource_ExactlyOnce_100_heap|211.835|200.355|-5.42%|
> |OneInput_Rebalance_LazyFromSource_ExactlyOnce_1024_rocksdb|1721.041667|1618.32|-5.97%|
> |OneInput_KeyBy_LazyFromSource_ExactlyOnce_10_heap|46|43.615|-5.18%|
> |OneInput_Broadcast_Eager_ExactlyOnce_100_rocksdb|212.105|199.688|-5.85%|
> |OneInput_Rescale_Eager_ExactlyOnce_1024_heap|1754.64|1600.12|-8.81%|
> |OneInput_Rebalance_Eager_ExactlyOnce_10_rocksdb|45.9167|43.0983|-6.14%|
> |OneInput_KeyBy_Eager_ExactlyOnce_100_heap|212.0816667|200.727|-5.35%|
> |OneInput_Broadcast_LazyFromSource_AtLeastOnce_1024_rocksdb|1718.245|1614.381667|-6.04%|
> |OneInput_Rescale_LazyFromSource_AtLeastOnce_10_heap|46.12|43.5517|-5.57%|
> |OneInput_Rebalance_LazyFromSource_AtLeastOnce_100_rocksdb|212.038|200.388|-5.49%|
> |OneInput_KeyBy_LazyFromSource_AtLeastOnce_1024_heap|1762.048333|1606.408333|-8.83%|
> |OneInput_Broadcast_Eager_AtLeastOnce_10_rocksdb|46.0583|43.4967|-5.56%|
> |OneInput_Rescale_Eager_AtLeastOnce_100_heap|212.233|201.188|-5.20%|
> |OneInput_Rebalance_Eager_AtLeastOnce_1024_rocksdb|1720.66|1616.85|-6.03%|
> |OneInput_KeyBy_Eager_AtLeastOnce_10_heap|46.14|43.6233|-5.45%|
> 
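The per-scenario deltas in the last column can be recomputed from the two version columns as (new - old) / old. A quick sanity check with three rows copied from the table above:

```python
# Recompute the regression percentages reported in the table:
# delta = (release_1_11 - release_1_10) / release_1_10 * 100.
rows = [
    ("OneInput_Broadcast_LazyFromSource_ExactlyOnce_10_rocksdb", 46.175, 43.8133),
    ("OneInput_Rescale_Eager_ExactlyOnce_1024_heap", 1754.64, 1600.12),
    ("OneInput_KeyBy_LazyFromSource_AtLeastOnce_1024_heap", 1762.048333, 1606.408333),
]
for name, old, new in rows:
    delta = (new - old) / old * 100
    print(f"{name}: {delta:.2f}%")  # -5.11%, -8.81%, -8.83%, matching the table
```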

[GitHub] [flink] flinkbot edited a comment on pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12782:
URL: https://github.com/apache/flink/pull/12782#issuecomment-650767887


   
   ## CI report:
   
   * bb22d5c079fe371b3d003fe9f33c958319304fe3 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4089)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12784: [FLINK-18373][statebackend] Drop useless RocksDB performance unit tests

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12784:
URL: https://github.com/apache/flink/pull/12784#issuecomment-650796879


   
   ## CI report:
   
   * d02ea0cfe658edd49c94445a52457232029e8575 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4091)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12784: [FLINK-18373][statebackend] Drop useless RocksDB performance unit tests

2020-06-28 Thread GitBox


flinkbot commented on pull request #12784:
URL: https://github.com/apache/flink/pull/12784#issuecomment-650796879


   
   ## CI report:
   
   * d02ea0cfe658edd49c94445a52457232029e8575 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12784: [FLINK-18373][statebackend] Drop useless RocksDB performance unit tests

2020-06-28 Thread GitBox


flinkbot commented on pull request #12784:
URL: https://github.com/apache/flink/pull/12784#issuecomment-650795572


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit d02ea0cfe658edd49c94445a52457232029e8575 (Sun Jun 28 
17:14:34 UTC 2020)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   







[jira] [Updated] (FLINK-18373) Drop performance unit tests in RocksDB state backend module

2020-06-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18373:
---
Labels: pull-request-available  (was: )

> Drop performance unit tests in RocksDB state backend module
> ---
>
> Key: FLINK-18373
> URL: https://issues.apache.org/jira/browse/FLINK-18373
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / State Backends, Tests
>Affects Versions: 1.12.0
>Reporter: Yun Tang
>Assignee: Yun Tang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.12.0
>
>
> As discussed in FLINK-15318, the current performance unit tests in the RocksDB 
> state backend module have become redundant, as 
> [https://github.com/apache/flink-benchmarks] has taken over the role of 
> watching performance.
> Moreover, due to the unstable CI host environment, those unit tests might 
> also break the whole CI phase sometimes.
> Thus, it's time to drop those performance unit tests.





[GitHub] [flink] Myasuka opened a new pull request #12784: [FLINK-18373][statebackend] Drop useless RocksDB performance unit tests

2020-06-28 Thread GitBox


Myasuka opened a new pull request #12784:
URL: https://github.com/apache/flink/pull/12784


   ## What is the purpose of the change
   
   Drop useless RocksDB performance unit tests
   
   ## Brief change log
   
   Drop `RocksDBListStatePerformanceTest.java`, `RocksDBPerformanceTest.java` 
and `RocksDBWriteBatchPerformanceTest.java`
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? not applicable
   







[GitHub] [flink] flinkbot edited a comment on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-626167900


   
   ## CI report:
   
   * c42ef1f62396948ece865a3b8628bea229d5ffb9 UNKNOWN
   * fb66f2e5d16daeabafaa62eab9493112997b9f74 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4088)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12783:
URL: https://github.com/apache/flink/pull/12783#issuecomment-650771390


   
   ## CI report:
   
   * b21bb432e55ff32fb9d929b8239ba369339c3963 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4090)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot edited a comment on pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12782:
URL: https://github.com/apache/flink/pull/12782#issuecomment-650767887


   
   ## CI report:
   
   * bb22d5c079fe371b3d003fe9f33c958319304fe3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4089)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[GitHub] [flink] flinkbot commented on pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot commented on pull request #12783:
URL: https://github.com/apache/flink/pull/12783#issuecomment-650771390


   
   ## CI report:
   
   * b21bb432e55ff32fb9d929b8239ba369339c3963 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Commented] (FLINK-18202) Introduce Protobuf format

2020-06-28 Thread liufangliang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147354#comment-17147354
 ] 

liufangliang commented on FLINK-18202:
--

For reference

[https://calcite.apache.org/avatica/docs/protobuf_reference.html] 

> Introduce Protobuf format
> -
>
> Key: FLINK-18202
> URL: https://issues.apache.org/jira/browse/FLINK-18202
> Project: Flink
>  Issue Type: New Feature
>  Components: Formats (JSON, Avro, Parquet, ORC, SequenceFile), Table 
> SQL / API
>Reporter: Benchao Li
>Priority: Major
> Attachments: image-2020-06-15-17-18-03-182.png
>
>
> PB[1] is a very famous and wildly used (de)serialization framework. The ML[2] 
> also has some discussions about this. It's a useful feature.
> This issue maybe needs some designs, or a FLIP.
> [1] [https://developers.google.com/protocol-buffers]
> [2] [http://apache-flink.147419.n8.nabble.com/Flink-SQL-UDF-td3725.html]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot commented on pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot commented on pull request #12782:
URL: https://github.com/apache/flink/pull/12782#issuecomment-650767887


   
   ## CI report:
   
   * bb22d5c079fe371b3d003fe9f33c958319304fe3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12781: [FLINK-18376][Table SQL/Runtime]Fix java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12781:
URL: https://github.com/apache/flink/pull/12781#issuecomment-650728077


   
   ## CI report:
   
   * 403cb8c2d677a1eb55e6315f2f79cfd002b30e6e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4087)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot commented on pull request #12783:
URL: https://github.com/apache/flink/pull/12783#issuecomment-65072


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit b21bb432e55ff32fb9d929b8239ba369339c3963 (Sun Jun 28 
14:14:55 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] leonardBang opened a new pull request #12783: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


leonardBang opened a new pull request #12783:
URL: https://github.com/apache/flink/pull/12783


   ## What is the purpose of the change
   
   * This pull request fixes the SQL client jar URL in the docs.
   
   
   ## Brief change log
   
   - The SQL client jar URL should be:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch6
 ...
   but currently is:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6...
   This applies to the Kafka and Elasticsearch SQL connectors.
   
   
   ## Verifying this change
   
   This change is a document fix without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


flinkbot commented on pull request #12782:
URL: https://github.com/apache/flink/pull/12782#issuecomment-650765842


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit bb22d5c079fe371b3d003fe9f33c958319304fe3 (Sun Jun 28 
14:10:13 UTC 2020)
   
✅ no warnings
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-18439) Update sql client jar url in docs

2020-06-28 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-18439:
---
Labels: pull-request-available  (was: )

> Update sql client jar url in docs
> -
>
> Key: FLINK-18439
> URL: https://issues.apache.org/jira/browse/FLINK-18439
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Leonard Xu
>Assignee: Leonard Xu
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> the sql client jar url should be:
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch6]
>  ...
> but current is :
> [https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6...]
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] leonardBang opened a new pull request #12782: [FLINK-18439][docs] Update sql client jar url in docs

2020-06-28 Thread GitBox


leonardBang opened a new pull request #12782:
URL: https://github.com/apache/flink/pull/12782


   ## What is the purpose of the change
   
   * This pull request fixes the SQL client jar URL in the docs.
   
   
   ## Brief change log
   
   - The SQL client jar URL should be:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-sql-connector-elasticsearch6
 ...
   but currently is:
https://repo.maven.apache.org/maven2/org/apache/flink/flink-connector-elasticsearch6...
   This applies to the Kafka and Elasticsearch SQL connectors.
   
   - Add instructions for using the HBase connector
   
   ## Verifying this change
   
   This change is a document fix without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): (no)
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: (no)
 - The serializers: (no)
  - The runtime per-record code paths (performance sensitive): (no)
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Yarn/Mesos, ZooKeeper: (no)
 - The S3 file system connector: (no)
   
   ## Documentation
   
 - Does this pull request introduce a new feature? (no)
 - If yes, how is the feature documented? (not applicable / docs / JavaDocs 
/ not documented)
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12779: [FLINK-17920][python][docs] Add the Python example of Time-windowed j…

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12779:
URL: https://github.com/apache/flink/pull/12779#issuecomment-650724847


   
   ## CI report:
   
   * af7a9716eeb50ba8d9dcf5ba38a79d6f053bb3cf Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4082)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Comment Edited] (FLINK-18164) null <> 'str' should be true

2020-06-28 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147332#comment-17147332
 ] 

Benchao Li edited comment on FLINK-18164 at 6/28/20, 12:57 PM:
---

[~Leonard Xu] Sorry to bother you about the issue status.

I closed this issue with an 'invalid' resolution before, since I thought that 
Flink's current implementation is correct and there is no need to improve it.

Do you think we need to borrow the configs from Calcite and improve the 
current `EQUALS` and `NOT_EQUALS` code generation logic?
Or should we describe the current implementation design in the issue description?

 


was (Author: libenchao):
[~Leonard Xu] Sorry to bother you about the issue status.

I closed this issue with a 'invalid' resolution before. I thought that 
currently Flink's implementation is correct and no need to improve it.

Do you think that we need to borrow the configs from Calcite and improve 
current `EQUALS` and `NOT_EQUALS` code generation logic?

 

> null <> 'str' should be true
> 
>
> Key: FLINK-18164
> URL: https://issues.apache.org/jira/browse/FLINK-18164
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Benchao Li
>Priority: Major
>
> Currently, if we compare null with other literals, the result will always be 
> false.
> It's because the code gen always gives a default value (false) for the 
> result. And I think it's a bug if `null <> 'str'` is false.
> It's reported from user-zh: 
> http://apache-flink.147419.n8.nabble.com/flink-sql-null-false-td3640.html
> CC [~jark] [~ykt836]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-18164) null <> 'str' should be true

2020-06-28 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147332#comment-17147332
 ] 

Benchao Li commented on FLINK-18164:


[~Leonard Xu] Sorry to bother you about the issue status.

I closed this issue with an 'invalid' resolution before, since I thought that 
Flink's current implementation is correct and there is no need to improve it.

Do you think we need to borrow the configs from Calcite and improve the 
current `EQUALS` and `NOT_EQUALS` code generation logic?
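For context, standard SQL comparisons follow three-valued logic: a comparison involving NULL evaluates to UNKNOWN rather than true or false, and a WHERE clause filters UNKNOWN rows out. The following is a minimal hypothetical Java model of these semantics (it is not Flink's actual generated code; the class and method names are invented for illustration):

```java
import java.util.Objects;

public class ThreeValuedLogic {

    // SQL's <> under three-valued logic: if either operand is SQL NULL,
    // the result is UNKNOWN (modeled here as a null Boolean), never true/false.
    static Boolean sqlNotEquals(Object a, Object b) {
        if (a == null || b == null) {
            return null; // UNKNOWN
        }
        return !Objects.equals(a, b);
    }

    public static void main(String[] args) {
        System.out.println(sqlNotEquals(null, "str"));  // null (UNKNOWN)
        System.out.println(sqlNotEquals("a", "str"));   // true
        System.out.println(sqlNotEquals("str", "str")); // false
    }
}
```

Under these semantics `null <> 'str'` is UNKNOWN, which a WHERE clause treats like false, matching the behavior described in the issue.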

 

> null <> 'str' should be true
> 
>
> Key: FLINK-18164
> URL: https://issues.apache.org/jira/browse/FLINK-18164
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Reporter: Benchao Li
>Priority: Major
>
> Currently, if we compare null with other literals, the result will always be 
> false.
> It's because the code gen always gives a default value (false) for the 
> result. And I think it's a bug if `null <> 'str'` is false.
> It's reported from user-zh: 
> http://apache-flink.147419.n8.nabble.com/flink-sql-null-false-td3640.html
> CC [~jark] [~ykt836]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] libenchao commented on a change in pull request #12759: [FLINK-18397][docs-zh] Translate "Table & SQL Connectors Overview" page into Chinese

2020-06-28 Thread GitBox


libenchao commented on a change in pull request #12759:
URL: https://github.com/apache/flink/pull/12759#discussion_r446643889



##
File path: docs/dev/table/connectors/index.zh.md
##
@@ -25,108 +25,108 @@ under the License.
 -->
 
 
-Flink's Table API & SQL programs can be connected to other external systems 
for reading and writing both batch and streaming tables. A table source 
provides access to data which is stored in external systems (such as a 
database, key-value store, message queue, or file system). A table sink emits a 
table to an external storage system. Depending on the type of source and sink, 
they support different formats such as CSV, Avro, Parquet, or ORC.
+Flink Table 和 SQL 可以支持对外部系统进行读和写的 table 批和流的处理。table 源端对外部数据拥有多种支持类型(例如 
数据库,键值对存储,消息队列,或者是文件系统)。table 目标端可以将表存储到另一个外部的系统中。可以支持指定类型的源端和目标端, 它们目前支持的格式有 
CSV,Avro,Parquet,ORC。

Review comment:
   `table 源端对外部数据拥有多种支持类型(例如 数据库,键值对存储,消息队列,或者是文件系统)` ->  `table source 
用于读取存储在外部系统(例如数据库、键值存储、消息队列或者文件系统)中的数据`

##
File path: docs/dev/table/connectors/index.zh.md
##
@@ -25,108 +25,108 @@ under the License.
 -->
 
 
-Flink's Table API & SQL programs can be connected to other external systems 
for reading and writing both batch and streaming tables. A table source 
provides access to data which is stored in external systems (such as a 
database, key-value store, message queue, or file system). A table sink emits a 
table to an external storage system. Depending on the type of source and sink, 
they support different formats such as CSV, Avro, Parquet, or ORC.
+Flink Table 和 SQL 可以支持对外部系统进行读和写的 table 批和流的处理。table 源端对外部数据拥有多种支持类型(例如 
数据库,键值对存储,消息队列,或者是文件系统)。table 目标端可以将表存储到另一个外部的系统中。可以支持指定类型的源端和目标端, 它们目前支持的格式有 
CSV,Avro,Parquet,ORC。

Review comment:
   感觉这句话有一点别扭:`Flink Table 和 SQL 可以支持对外部系统进行读和写的 table 批和流的处理`  
   你觉得改成这样是不是会好一点:`Flink Table 和 SQL 可以连接到外部系统进行批和流的读写` ?
   

##
File path: docs/dev/table/connectors/index.zh.md
##
@@ -25,108 +25,108 @@ under the License.
 -->
 
 
-Flink's Table API & SQL programs can be connected to other external systems 
for reading and writing both batch and streaming tables. A table source 
provides access to data which is stored in external systems (such as a 
database, key-value store, message queue, or file system). A table sink emits a 
table to an external storage system. Depending on the type of source and sink, 
they support different formats such as CSV, Avro, Parquet, or ORC.
+Flink Table 和 SQL 可以支持对外部系统进行读和写的 table 批和流的处理。table 源端对外部数据拥有多种支持类型(例如 
数据库,键值对存储,消息队列,或者是文件系统)。table 目标端可以将表存储到另一个外部的系统中。可以支持指定类型的源端和目标端, 它们目前支持的格式有 
CSV,Avro,Parquet,ORC。
 
-This page describes how to register table sources and table sinks in Flink 
using the natively supported connectors. After a source or sink has been 
registered, it can be accessed by Table API & SQL statements.
+这页主要描述的是如何使用现有flink 原生态支持连接器去注册 table 源端和 table 目标端。 在源端或者目标端注册完成之后。 所注册 table 
通过 Table API 和 SQL 的声明方式去访问和使用。
 
-NOTE If you want to implement your own 
*custom* table source or sink, have a look at the [user-defined sources & sinks 
page]({% link dev/table/sourceSinks.zh.md %}).
+笔记如果你想要实现属于你自定义 table 源端 或 目标端。 可以查看 
[user-defined sources & sinks page]({% link dev/table/sourceSinks.zh.md %})。

Review comment:
   ```suggestion
   注意如果你想要实现自定义 table source 或 sink, 可以查看 
[自定义 source 和 sink]({% link dev/table/sourceSinks.zh.md %})。
   ```

##
File path: docs/dev/table/connectors/index.zh.md
##
@@ -25,108 +25,108 @@ under the License.
 -->
 
 
-Flink's Table API & SQL programs can be connected to other external systems 
for reading and writing both batch and streaming tables. A table source 
provides access to data which is stored in external systems (such as a 
database, key-value store, message queue, or file system). A table sink emits a 
table to an external storage system. Depending on the type of source and sink, 
they support different formats such as CSV, Avro, Parquet, or ORC.
+Flink Table 和 SQL 可以支持对外部系统进行读和写的 table 批和流的处理。table 源端对外部数据拥有多种支持类型(例如 
数据库,键值对存储,消息队列,或者是文件系统)。table 目标端可以将表存储到另一个外部的系统中。可以支持指定类型的源端和目标端, 它们目前支持的格式有 
CSV,Avro,Parquet,ORC。
 
-This page describes how to register table sources and table sinks in Flink 
using the natively supported connectors. After a source or sink has been 
registered, it can be accessed by Table API & SQL statements.
+这页主要描述的是如何使用现有flink 原生态支持连接器去注册 table 源端和 table 目标端。 在源端或者目标端注册完成之后。 所注册 table 
通过 Table API 和 SQL 的声明方式去访问和使用。
 
-NOTE If you want to implement your own 
*custom* table source or sink, have a look at the [user-defined sources & sinks 
page]({% link dev/table/sourceSinks.zh.md %}).
+笔记如果你想要实现属于你自定义 table 源端 或 目标端。 可以查看 
[user-defined sources & sinks page]({% link dev/table/sourceSinks.zh.md %})。
 
-Attention Flink Table & SQL introduces 
a new set of connector options since 1.11.0, if you are using the legacy 
connector options, please refer to the [legacy documentation]({% link 
dev/table/connect.zh.md 

[GitHub] [flink] flinkbot edited a comment on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-626167900


   
   ## CI report:
   
   * c42ef1f62396948ece865a3b8628bea229d5ffb9 UNKNOWN
   * 85e86fe4320767d9bba5b0f24d42772d8cbaf9be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3860)
 
   * fb66f2e5d16daeabafaa62eab9493112997b9f74 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4088)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] senegalo commented on pull request #12729: [FLINK-10195][connectors/rabbitmq] Allow setting QoS

2020-06-28 Thread GitBox


senegalo commented on pull request #12729:
URL: https://github.com/apache/flink/pull/12729#issuecomment-650744581


   @austince Really nice, awesome work! I too need this badly in one of our 
production systems, as we always run into memory issues when we restart the app 
and have to bump the values to ridiculous levels on restart.
   
   I had just one tiny comment regarding the `global` value in the `basicQos` 
method.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] senegalo commented on a change in pull request #12729: [FLINK-10195][connectors/rabbitmq] Allow setting QoS

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12729:
URL: https://github.com/apache/flink/pull/12729#discussion_r446643314



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java
##
@@ -141,6 +140,22 @@ protected Connection setupConnection() throws Exception {
return setupConnectionFactory().newConnection();
}
 
+   /**
+* Initializes the consumer's {@link Channel}. If a prefetch count has 
been set in {@link RMQConnectionConfig},
+* the new channel will use it for {@link Channel#basicQos(int)}.
+*
+* @param connection the consumer's {@link Connection}.
+* @return the channel.
+* @throws Exception if there is an issue creating or configuring the 
channel.
+*/
+   protected Channel setupChannel(Connection connection) throws Exception {
+   Channel chan = connection.createChannel();
+   if (rmqConnectionConfig.getPrefetchCount().isPresent()) {
+   
chan.basicQos(rmqConnectionConfig.getPrefetchCount().get());

Review comment:
   Should we set the global flag to true, to enforce having only that 
limit throughout the channel? 
   I am worried that if someone adds functionality that, say, requires having 
multiple consumers on the same channel, the `preFetchCount` would then be 
per consumer, and the prefetch count set by the client would suddenly mean a 
completely different thing. But if enforced here to be per channel, it will have 
the same effect for the user.
   
   According to the documentation, `chan.basicQos(prefetchCount, boolean 
global)`, where the boolean `global` is: 
   
   ```
   false | applied separately to each new consumer on the channel
   true  | shared across all consumers on the channel
   ```
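The difference between the two scopes can be sketched with a toy bookkeeping class. This is a hypothetical model of the semantics quoted above, not the RabbitMQ client itself; `PrefetchModel`, `mayDeliver`, and `deliver` are invented names for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of how a basicQos prefetch limit is scoped on one channel.
class PrefetchModel {
    final int prefetchCount;
    final boolean global;
    private int sharedInFlight = 0;                       // unacked, whole channel
    private final Map<String, Integer> perConsumer = new HashMap<>();

    PrefetchModel(int prefetchCount, boolean global) {
        this.prefetchCount = prefetchCount;
        this.global = global;
    }

    // Whether the broker may deliver one more unacked message to `consumer`:
    // against the shared channel budget if global, else the consumer's own.
    boolean mayDeliver(String consumer) {
        return global
                ? sharedInFlight < prefetchCount
                : perConsumer.getOrDefault(consumer, 0) < prefetchCount;
    }

    // Record one unacked delivery to `consumer`.
    void deliver(String consumer) {
        sharedInFlight++;
        perConsumer.merge(consumer, 1, Integer::sum);
    }
}
```

With `prefetchCount = 1` and `global = true`, one unacked delivery to consumer A blocks further deliveries to consumer B on the same channel; with `global = false`, B still has its own budget of one, which is the per-consumer drift the comment worries about.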





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-10672) Task stuck while writing output to flink

2020-06-28 Thread Peidian Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147323#comment-17147323
 ] 

Peidian Li commented on FLINK-10672:


I met the same problem with Flink 1.9. Is there any conclusion about this 
problem?

> Task stuck while writing output to flink
> 
>
> Key: FLINK-10672
> URL: https://issues.apache.org/jira/browse/FLINK-10672
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Affects Versions: 1.5.4
> Environment: OS: Debuan rodente 4.17
> Flink version: 1.5.4
> ||Key||Value||
> |jobmanager.heap.mb|1024|
> |jobmanager.rpc.address|localhost|
> |jobmanager.rpc.port|6123|
> |metrics.reporter.jmx.class|org.apache.flink.metrics.jmx.JMXReporter|
> |metrics.reporter.jmx.port|9250-9260|
> |metrics.reporters|jmx|
> |parallelism.default|1|
> |rest.port|8081|
> |taskmanager.heap.mb|1024|
> |taskmanager.numberOfTaskSlots|1|
> |web.tmpdir|/tmp/flink-web-bdb73d6c-5b9e-47b5-9ebf-eed0a7c82c26|
>  
> h1. Overview
> ||Data Port||All Slots||Free Slots||CPU Cores||Physical Memory||JVM Heap 
> Size||Flink Managed Memory||
> |43501|1|0|12|62.9 GB|922 MB|642 MB|
> h1. Memory
> h2. JVM (Heap/Non-Heap)
> ||Type||Committed||Used||Maximum||
> |Heap|922 MB|575 MB|922 MB|
> |Non-Heap|68.8 MB|64.3 MB|-1 B|
> |Total|991 MB|639 MB|922 MB|
> h2. Outside JVM
> ||Type||Count||Used||Capacity||
> |Direct|3,292|105 MB|105 MB|
> |Mapped|0|0 B|0 B|
> h1. Network
> h2. Memory Segments
> ||Type||Count||
> |Available|3,194|
> |Total|3,278|
> h1. Garbage Collection
> ||Collector||Count||Time||
> |G1_Young_Generation|13|336|
> |G1_Old_Generation|1|21|
>Reporter: Ankur Goenka
>Assignee: Yun Gao
>Priority: Major
>  Labels: beam
> Attachments: 0.14_all_jobs.jpg, 1uruvakHxBu.png, 3aDKQ24WvKk.png, 
> Po89UGDn58V.png, WithBroadcastJob.png, jmx_dump.json, jmx_dump_detailed.json, 
> jstack_129827.log, jstack_163822.log, jstack_66985.log
>
>
> I am running a fairly complex pipleline with 200+ task.
> The pipeline works fine with small data (order of 10kb input) but gets stuck 
> with a slightly larger data (300kb input).
>  
> The task gets stuck while writing the output toFlink, more specifically it 
> gets stuck while requesting memory segment in local buffer pool. The Task 
> manager UI shows that it has enough memory and memory segments to work with.
> The relevant stack trace is 
> {quote}"grpc-default-executor-0" #138 daemon prio=5 os_prio=0 
> tid=0x7fedb0163800 nid=0x30b7f in Object.wait() [0x7fedb4f9]
>  java.lang.Thread.State: TIMED_WAITING (on object monitor)
>  at (C/C++) 0x7fef201c7dae (Unknown Source)
>  at (C/C++) 0x7fef1f2aea07 (Unknown Source)
>  at (C/C++) 0x7fef1f241cd3 (Unknown Source)
>  at java.lang.Object.wait(Native Method)
>  - waiting on <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestMemorySegment(LocalBufferPool.java:247)
>  - locked <0xf6d56450> (a java.util.ArrayDeque)
>  at 
> org.apache.flink.runtime.io.network.buffer.LocalBufferPool.requestBufferBuilderBlocking(LocalBufferPool.java:204)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.requestNewBufferBuilder(RecordWriter.java:213)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.sendToTarget(RecordWriter.java:144)
>  at 
> org.apache.flink.runtime.io.network.api.writer.RecordWriter.emit(RecordWriter.java:107)
>  at 
> org.apache.flink.runtime.operators.shipping.OutputCollector.collect(OutputCollector.java:65)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:42)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStagePruningFunction.flatMap(FlinkExecutableStagePruningFunction.java:26)
>  at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.collect(ChainedFlatMapDriver.java:80)
>  at 
> org.apache.flink.runtime.operators.util.metrics.CountingCollector.collect(CountingCollector.java:35)
>  at 
> org.apache.beam.runners.flink.translation.functions.FlinkExecutableStageFunction$MyDataReceiver.accept(FlinkExecutableStageFunction.java:230)
>  - locked <0xf6a60bd0> (a java.lang.Object)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:81)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataInboundObserver.accept(BeamFnDataInboundObserver.java:32)
>  at 
> org.apache.beam.sdk.fn.data.BeamFnDataGrpcMultiplexer$InboundObserver.onNext(BeamFnDataGrpcMultiplexer.java:139)
>  at 
> 

[GitHub] [flink] wangpeibin713 commented on pull request #12780: [FLINK-18441][Table SQL/Runtime] Support Local-Global Aggregation with LastValue Funciton

2020-06-28 Thread GitBox


wangpeibin713 commented on pull request #12780:
URL: https://github.com/apache/flink/pull/12780#issuecomment-650743482


   @flinkbot run azure



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18442) Move `testSessionWindowsWithContinuousEventTimeTrigger` to `ContinuousEventTimeTriggerTest`

2020-06-28 Thread Lijie Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147322#comment-17147322
 ] 

Lijie Wang commented on FLINK-18442:


cc [~aljoscha]

> Move `testSessionWindowsWithContinuousEventTimeTrigger` to 
> `ContinuousEventTimeTriggerTest`
> ---
>
> Key: FLINK-18442
> URL: https://issues.apache.org/jira/browse/FLINK-18442
> Project: Flink
>  Issue Type: Test
>  Components: Tests
>Reporter: Lijie Wang
>Priority: Major
> Fix For: 1.12.0
>
>
> `testSessionWindowsWithContinuousEventTimeTrigger` in `WindowOperatorTest` was 
> introduced in the fix for 
> [FLINK-4862|https://issues.apache.org/jira/browse/FLINK-4862].
> But it's better to move `testSessionWindowsWithContinuousEventTimeTrigger` 
> into `ContinuousEventTimeTriggerTest`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (FLINK-18442) Move `testSessionWindowsWithContinuousEventTimeTrigger` to `ContinuousEventTimeTriggerTest`

2020-06-28 Thread Lijie Wang (Jira)
Lijie Wang created FLINK-18442:
--

 Summary: Move `testSessionWindowsWithContinuousEventTimeTrigger` 
to `ContinuousEventTimeTriggerTest`
 Key: FLINK-18442
 URL: https://issues.apache.org/jira/browse/FLINK-18442
 Project: Flink
  Issue Type: Test
  Components: Tests
Reporter: Lijie Wang
 Fix For: 1.12.0


`testSessionWindowsWithContinuousEventTimeTrigger` in `WindowOperatorTest` was 
introduced in the fix for 
[FLINK-4862|https://issues.apache.org/jira/browse/FLINK-4862].

But it's better to move `testSessionWindowsWithContinuousEventTimeTrigger` into 
`ContinuousEventTimeTriggerTest`.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-626167900


   
   ## CI report:
   
   * c42ef1f62396948ece865a3b8628bea229d5ffb9 UNKNOWN
   * 85e86fe4320767d9bba5b0f24d42772d8cbaf9be Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3860)
 
   * fb66f2e5d16daeabafaa62eab9493112997b9f74 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] senegalo commented on pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on pull request #12056:
URL: https://github.com/apache/flink/pull/12056#issuecomment-650738330


   @dawidwys @austince Thanks for the "get well" wishes :) I am doing much 
better now.
   
   I finished addressing some of the comments you two raised; I also had some comments 
/ clarification requests on the others.
   
   Can't stress it enough: thank you for your time :)







[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446638002



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java
##
@@ -239,25 +294,75 @@ public void run(SourceContext ctx) throws Exception {
}
}
 
-   private class RMQCollector implements Collector {
-
+   /**
+* Special collector for RMQ messages.
+* Captures the correlation ID and delivery tag, and implements the filtering 
logic for whether a message has been
+* processed or not.
+*/
+   private class RMQCollectorImpl implements 
RMQDeserializationSchema.RMQCollector<OUT> {
private final SourceContext<OUT> ctx;
private boolean endOfStreamSignalled = false;
+   private long deliveryTag;
+   private Boolean preCheckFlag;
 
-   private RMQCollector(SourceContext<OUT> ctx) {
+   private RMQCollectorImpl(SourceContext<OUT> ctx) {
this.ctx = ctx;
}
 
@Override
public void collect(OUT record) {
-   if (endOfStreamSignalled || 
schema.isEndOfStream(record)) {
-   this.endOfStreamSignalled = true;
+   Preconditions.checkNotNull(preCheckFlag, 
"setCorrelationID must be called at least once before " +
+   "calling this method!");
+
+   if (!preCheckFlag) {
return;
}
 
+   if (isEndOfStream(record)) {
+   this.endOfStreamSignalled = true;
+   return;
+   }
ctx.collect(record);
}
 
+   public void collect(List<OUT> records) {
+   Preconditions.checkNotNull(preCheckFlag, 
"setCorrelationID must be called at least once before " +
+   "calling this method!");
+
+   if (!preCheckFlag) {
+   return;
+   }
+
+   for (OUT record : records){
+   if (isEndOfStream(record)) {
+   this.endOfStreamSignalled = true;
+   return;
+   }
+   ctx.collect(record);
+   }
+   }
+
+   public void setMessageIdentifiers(String correlationId, long 
deliveryTag){
+   preCheckFlag = true;
+   if (!autoAck) {
+   if (usesCorrelationId) {
+   
Preconditions.checkNotNull(correlationId, "RabbitMQ source was instantiated " +
+   "with usesCorrelationId set to 
true yet we couldn't extract the correlation id from it !");
+   if (!addId(correlationId)) {
+   // we have already processed 
this message
+   preCheckFlag = false;
+   }
+   }
+   sessionIds.add(deliveryTag);
+   }
+   }

Review comment:
   That was actually my initial approach, but here is what I faced that made 
me split the validation logic and write the code the way I did.
   
   Given that we might also be calling `collect(List record)`, we would 
either have to:
   * copy-paste the validation logic so it is also checked there, or
   * make `collect(List record)` call the `collect(T record)` method 
internally to add singular records, but then the validation code would be called 
multiple times, yielding the same result each time.
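The trade-off described in this comment (copy-pasting the validation vs. re-running it per record) can also be resolved by pulling the precondition into a private helper that both `collect` overloads call once. The sketch below is hypothetical, with simplified stand-in types, and is not the actual Flink `RMQSource` code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal stand-in for the collector under discussion (NOT the real Flink class).
// Both collect overloads share one validation helper, so the precondition is
// written once, and the batch overload still checks it only once per delivery.
class SketchCollector<T> {
    private Boolean preCheckFlag; // null until setMessageIdentifiers() is called
    private boolean endOfStreamSignalled = false;
    final List<T> emitted = new ArrayList<>();

    void setMessageIdentifiers(String correlationId, long deliveryTag) {
        // the real source would record the correlation id / delivery tag here
        preCheckFlag = true;
    }

    // Shared validation: fails loudly if identifiers were never set,
    // returns false if this delivery was flagged as already processed.
    private boolean readyToCollect() {
        if (preCheckFlag == null) {
            throw new IllegalStateException(
                "setMessageIdentifiers must be called before collect");
        }
        return preCheckFlag;
    }

    void collect(T record) {
        if (!readyToCollect() || endOfStreamSignalled) {
            return;
        }
        emitted.add(record);
    }

    void collect(List<T> records) {
        if (!readyToCollect()) { // validated once for the whole batch
            return;
        }
        for (T record : records) {
            if (endOfStreamSignalled) {
                return;
            }
            emitted.add(record);
        }
    }
}
```

With this shape, the list overload neither duplicates the check nor re-runs it per element.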









[GitHub] [flink] flinkbot edited a comment on pull request #12772: [FLINK-18394][docs-zh] Translate "Parquet Format" page into Chinese

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12772:
URL: https://github.com/apache/flink/pull/12772#issuecomment-650132836


   
   ## CI report:
   
   * dc5fc0fbcc42a47c3d375d40a2c230829cb9 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4079)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   







[jira] [Comment Edited] (FLINK-18440) ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions

2020-06-28 Thread JinxinTang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147311#comment-17147311
 ] 

JinxinTang edited comment on FLINK-18440 at 6/28/20, 11:07 AM:
---

!image-2020-06-28-18-55-43-692.png!

Hope this could help :)


was (Author: jinxintang):
!image-2020-06-28-18-55-43-692.png!

> ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or 
> ROW_NUMBER functions
> 
>
> Key: FLINK-18440
> URL: https://issues.apache.org/jira/browse/FLINK-18440
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.11.1
>
> Attachments: image-2020-06-28-18-55-43-692.png
>
>
> When I run flink sql ,the flink sql like this:
> create view test as select  name, eat ,sum(age) as cnt from test_source group 
> by  name,eat;
> create view results as select *, ROW_NUMBER() OVER (PARTITION BY name 
> ORDER BY cnt DESC) as row_num from test;
> create table sink (
> name varchar,
> eat varchar,
> cnt bigint
> )
> with(
> 'connector' = 'print'
> );
> insert into sink select name,eat , cnt from results where  row_num <= 3 ;
> The same sql code I could run success in flink 1.10, now I change the flink 
> version into flink 1.11, it throw the exception.
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 124 to line 1, column 127: 
> ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl$ToRelContextImpl.expandView(FlinkPlannerImpl.scala:204)
>   at 
> org.apache.calcite.plan.ViewExpanders$1.expandView(ViewExpanders.java:52)
>   at 
> org.apache.flink.table.planner.catalog.SqlCatalogViewTable.convertToRel(SqlCatalogViewTable.java:58)
>   at 
> org.apache.flink.table.planner.plan.schema.ExpandingPreparingTable.expand(ExpandingPreparingTable.java:59)
>   at 
> org.apache.flink.table.planner.plan.schema.ExpandingPreparingTable.toRel(ExpandingPreparingTable.java:55)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)





[jira] [Issue Comment Deleted] (FLINK-18440) ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions

2020-06-28 Thread JinxinTang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JinxinTang updated FLINK-18440:
---
Comment: was deleted

(was: Maybe this could help. :))






[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446636396



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/test/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSourceTest.java
##
@@ -382,6 +414,27 @@ public boolean isEndOfStream(String nextElement) {
}
}
 
+   private class CustomDeserializationSchema implements 
RMQDeserializationSchema<String> {
+   @Override
+   public TypeInformation<String> getProducedType() {
+   return TypeExtractor.getForClass(String.class);
+   }
+
+   @Override
+   public void processMessage(Envelope envelope, 
AMQP.BasicProperties properties, byte[] body, RMQCollector<String> collector) throws 
IOException {
+   List<String> messages = new ArrayList<>();
+   messages.add("I Love Turtles");
+   messages.add("Brush your teeth");

Review comment:
   I am sorry, I am known to play around with test strings :D I can change 
it ... but then what do you have against teeth brushing!? Explain yourself, 
mister! :D









[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446636112



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java
##
@@ -208,29 +252,40 @@ public void close() throws Exception {
}
}
 
+   /**
+* Parses and collects the body of an AMQP message.
+*
+* If any of the constructors taking a {@link DeserializationSchema} 
was used to construct the source,
+* it uses {@link DeserializationSchema#deserialize(byte[])} to 
parse the body of the AMQP message.
+*
+* If any of the constructors taking a {@link 
RMQDeserializationSchema} was used to construct the source, it uses the
+* {@link RMQDeserializationSchema#processMessage(Envelope, 
AMQP.BasicProperties, byte[], RMQDeserializationSchema.RMQCollector)}
+* method of that instance.
+*
+* @param delivery the AMQP {@link QueueingConsumer.Delivery}
+* @param collector a {@link RMQCollectorImpl} to collect the data
+* @throws IOException
+*/
+   protected void processMessage(QueueingConsumer.Delivery delivery, 
RMQDeserializationSchema.RMQCollector collector) throws IOException {
+   AMQP.BasicProperties properties = delivery.getProperties();
+   byte[] body = delivery.getBody();
+
+   if (deliveryDeserializer != null){
+   Envelope envelope = delivery.getEnvelope();
+   deliveryDeserializer.processMessage(envelope, 
properties, body, collector);
+   } else {
+   
collector.setMessageIdentifiers(properties.getCorrelationId(), 
delivery.getEnvelope().getDeliveryTag());
+   collector.collect(schema.deserialize(body));
+   }
+   }

Review comment:
   Actually, after your awesome suggestion of wrapping the deserialization 
schema into the new interface, this method is now:
   
   ```
AMQP.BasicProperties properties = delivery.getProperties();
byte[] body = delivery.getBody();
Envelope envelope = delivery.getEnvelope();
   
deliveryDeserializer.deserialize(envelope, properties, body, 
collector);
   ```
   Hence my comment on top "Elegant Solution !"
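For readers following the thread, the wrapping idea referenced here (adapting a plain byte[]-only deserializer to the richer delivery-aware interface so the source keeps a single code path) can be sketched roughly as below. All names and types are simplified stand-ins for illustration, not the real Flink/RabbitMQ classes:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplified interfaces standing in for the real ones discussed above.
interface SimpleSchema<T> {
    T deserialize(byte[] body) throws IOException;
}

interface SketchSink<T> {
    void setMessageIdentifiers(String correlationId, long deliveryTag);
    void collect(T record);
}

interface DeliverySchema<T> {
    void deserialize(String correlationId, long deliveryTag, byte[] body,
                     SketchSink<T> sink) throws IOException;
}

final class SchemaAdapter {
    // Wraps a byte[]-only schema: identifiers are registered first,
    // then the single parsed record is emitted, so the source only ever
    // has to call one deserialize path.
    static <T> DeliverySchema<T> wrap(SimpleSchema<T> schema) {
        return (correlationId, deliveryTag, body, sink) -> {
            sink.setMessageIdentifiers(correlationId, deliveryTag);
            sink.collect(schema.deserialize(body));
        };
    }
}
```

The adapter keeps the old constructor-based behavior while letting the source treat every schema uniformly.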









[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446635690



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQDeserializationSchema.java
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.rabbitmq;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.util.Collector;
+
+import com.rabbitmq.client.AMQP;
+import com.rabbitmq.client.Envelope;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.List;
+
+/**
+ * Interface for the set of methods required to parse an RMQ delivery.
+ * @param <T> The output type of the {@link RMQSource}
+ */
+public interface RMQDeserializationSchema<T> extends Serializable, 
ResultTypeQueryable<T> {
+   /**
+* This method takes all the RabbitMQ delivery information supplied by 
the client, extracts the data, and passes it to the
+* collector.
+* NOTICE: The implementation of this method MUST call {@link 
RMQCollector#setMessageIdentifiers(String, long)} with
+* the correlation ID of the message if checkpointing and 
usesCorrelationId (in the RMQSource constructor) were enabled for
+* the {@link RMQSource}.
+* @param envelope
+* @param properties
+* @param body
+* @throws IOException
+*/
+   public void processMessage(Envelope envelope, AMQP.BasicProperties 
properties, byte[] body, RMQCollector<T> collector) throws IOException;
+
+   /**
+* Method to decide whether the element signals the end of the stream. 
If
+* true is returned the element won't be emitted.
+*
+* @param nextElement The element to test for the end-of-stream signal.
+* @return True, if the element signals end of stream, false otherwise.
+*/
+   public boolean isEndOfStream(T nextElement);

Review comment:
   nit pick all you want ! 
   
   `isEndOfStream` is closed for public by order of @austince  till further 
notice !









[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446635536



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQDeserializationSchema.java
##
@@ -0,0 +1,78 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.streaming.connectors.rabbitmq;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.java.typeutils.ResultTypeQueryable;
+import org.apache.flink.util.Collector;
+
+import com.rabbitmq.client.AMQP;
+import com.rabbitmq.client.Envelope;
+
+import java.io.IOException;
+import java.io.Serializable;
+import java.util.List;
+
+/**
+ * Interface for the set of methods required to parse an RMQ delivery.
+ * @param <T> The output type of the {@link RMQSource}
+ */
+public interface RMQDeserializationSchema<T> extends Serializable, 
ResultTypeQueryable<T> {
+   /**
+* This method takes all the RabbitMQ delivery information supplied by 
the client, extracts the data, and passes it to the
+* collector.
+* NOTICE: The implementation of this method MUST call {@link 
RMQCollector#setMessageIdentifiers(String, long)} with
+* the correlation ID of the message if checkpointing and 
usesCorrelationId (in the RMQSource constructor) were enabled for
+* the {@link RMQSource}.
+* @param envelope
+* @param properties
+* @param body
+* @throws IOException
+*/
+   public void processMessage(Envelope envelope, AMQP.BasicProperties 
properties, byte[] body, RMQCollector<T> collector) throws IOException;
+
+   /**
+* Method to decide whether the element signals the end of the stream. 
If
+* true is returned the element won't be emitted.
+*
+* @param nextElement The element to test for the end-of-stream signal.
+* @return True, if the element signals end of stream, false otherwise.
+*/
+   public boolean isEndOfStream(T nextElement);
+
+   /**
+* The {@link TypeInformation} for the deserialized T.
+* As an example the proper implementation of this method if T is a 
String is:
+* {@code return TypeExtractor.getForClass(String.class)}
+* @return TypeInformation
+*/
+   public TypeInformation<T> getProducedType();
+
+
+   /**
+* Special collector for RMQ messages.
+* Captures the correlation ID and delivery tag, and implements the filtering 
logic for whether a message has been
+* processed or not.
+* @param <T>
+*/
+   public interface RMQCollector<T> extends Collector<T> {
+   public void collect(List<T> records);
+
+   public void setMessageIdentifiers(String correlationId, long 
deliveryTag);
+   }

Review comment:
   The point of having the `setMessageIdentifiers` method was to make sure 
that the user supplies the `correlationId` and `deliveryTag` before calling 
either `collect(T record)` or `collect(List record)`, since those are used to 
validate that the record has not been added before.
   
   If we go with the proposed interface, we would always have to call the 
`collect(String correlationId, long deliveryTag, T record)` method, and 
if you call `void collect(T record)` either run it without the checks, because 
the `correlationId` and `deliveryTag` were not supplied, or throw an error.
   
   I also didn't understand the `reset()` method :D and its relationship with 
the state. 
   
   I'll keep the interface as is till you guys clarify this little bit of info 
for me.
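To make the contract being debated concrete: an implementation is expected to call `setMessageIdentifiers` exactly once per delivery, before emitting any records, so the collector can run its duplicate-message filtering first. A hypothetical sketch with simplified stand-in types (not the real interfaces):

```java
import java.nio.charset.StandardCharsets;

// Simplified stand-in for the collector contract (hypothetical names).
interface MiniCollector<T> {
    void setMessageIdentifiers(String correlationId, long deliveryTag);
    void collect(T record);
}

// Example schema: one record per line of the delivery body.
// Identifiers are registered before the first collect() call, honoring
// the "setMessageIdentifiers first" contract described in the thread.
class LineSplittingSchema {
    void processMessage(String correlationId, long deliveryTag, byte[] body,
                        MiniCollector<String> out) {
        out.setMessageIdentifiers(correlationId, deliveryTag);
        for (String line : new String(body, StandardCharsets.UTF_8).split("\n")) {
            out.collect(line);
        }
    }
}
```

An implementation that emitted records before registering identifiers would bypass the duplicate check entirely, which is why the ordering matters.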









[jira] [Commented] (FLINK-18440) ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions

2020-06-28 Thread JinxinTang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147312#comment-17147312
 ] 

JinxinTang commented on FLINK-18440:


Maybe this could help. :)






[jira] [Commented] (FLINK-18440) ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions

2020-06-28 Thread JinxinTang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147311#comment-17147311
 ] 

JinxinTang commented on FLINK-18440:


!image-2020-06-28-18-55-43-692.png!




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-18440) ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions

2020-06-28 Thread JinxinTang (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-18440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

JinxinTang updated FLINK-18440:
---
Attachment: image-2020-06-28-18-55-43-692.png

> ROW_NUMBER function: ROW/RANGE not allowed with RANK, DENSE_RANK or 
> ROW_NUMBER functions
> 
>
> Key: FLINK-18440
> URL: https://issues.apache.org/jira/browse/FLINK-18440
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Runtime
>Affects Versions: 1.11.0
>Reporter: LakeShen
>Priority: Major
> Fix For: 1.11.1
>
> Attachments: image-2020-06-28-18-55-43-692.png
>
>
> When I run the following Flink SQL:
> create view test as select name, eat, sum(age) as cnt from test_source group
> by name, eat;
> create view results as select *, ROW_NUMBER() OVER (PARTITION BY name
> ORDER BY cnt DESC) as row_num from test;
> create table sink (
> name varchar,
> eat varchar,
> cnt bigint
> )
> with (
> 'connector' = 'print'
> );
> insert into sink select name, eat, cnt from results where row_num <= 3;
> The same SQL ran successfully on Flink 1.10; after upgrading to Flink 1.11 it
> throws the exception below.
> Exception in thread "main" org.apache.flink.table.api.ValidationException: 
> SQL validation failed. From line 1, column 124 to line 1, column 127: 
> ROW/RANGE not allowed with RANK, DENSE_RANK or ROW_NUMBER functions
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$validate(FlinkPlannerImpl.scala:146)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl$ToRelContextImpl.expandView(FlinkPlannerImpl.scala:204)
>   at 
> org.apache.calcite.plan.ViewExpanders$1.expandView(ViewExpanders.java:52)
>   at 
> org.apache.flink.table.planner.catalog.SqlCatalogViewTable.convertToRel(SqlCatalogViewTable.java:58)
>   at 
> org.apache.flink.table.planner.plan.schema.ExpandingPreparingTable.expand(ExpandingPreparingTable.java:59)
>   at 
> org.apache.flink.table.planner.plan.schema.ExpandingPreparingTable.toRel(ExpandingPreparingTable.java:55)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.toRel(SqlToRelConverter.java:3492)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertIdentifier(SqlToRelConverter.java:2415)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2102)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertFrom(SqlToRelConverter.java:2051)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelectImpl(SqlToRelConverter.java:661)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertSelect(SqlToRelConverter.java:642)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQueryRecursive(SqlToRelConverter.java:3345)
>   at 
> org.apache.calcite.sql2rel.SqlToRelConverter.convertQuery(SqlToRelConverter.java:568)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.org$apache$flink$table$planner$calcite$FlinkPlannerImpl$$rel(FlinkPlannerImpl.scala:164)
>   at 
> org.apache.flink.table.planner.calcite.FlinkPlannerImpl.rel(FlinkPlannerImpl.scala:151)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.toQueryOperation(SqlToOperationConverter.java:773)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlQuery(SqlToOperationConverter.java:745)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convert(SqlToOperationConverter.java:238)
>   at 
> org.apache.flink.table.planner.operations.SqlToOperationConverter.convertSqlInsert(SqlToOperationConverter.java:527)
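
For readers hitting the same error: Flink's documented Top-N pattern filters on the ROW_NUMBER() result in a query that directly encloses the OVER clause. As a sketch of a possible workaround (an assumption, not the confirmed fix for this ticket), the two views can be inlined so the rank filter sits immediately above the window function:

```sql
-- Sketch of the reported job with the views inlined (same table and
-- column names as in the report). The outer WHERE on row_num directly
-- wraps the ROW_NUMBER() subquery, matching the documented Top-N shape.
INSERT INTO sink
SELECT name, eat, cnt
FROM (
  SELECT
    name, eat, cnt,
    ROW_NUMBER() OVER (PARTITION BY name ORDER BY cnt DESC) AS row_num
  FROM (
    SELECT name, eat, SUM(age) AS cnt
    FROM test_source
    GROUP BY name, eat
  ) t
) r
WHERE row_num <= 3;
```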



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446633295



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java
##
@@ -77,6 +78,7 @@
private final RMQConnectionConfig rmqConnectionConfig;
protected final String queueName;
private final boolean usesCorrelationId;
+   protected RMQDeserializationSchema deliveryDeserializer;
protected DeserializationSchema schema;

Review comment:
   Just a curious question (I am not that good with Java, tbh!):
   why a static function here? Why not a private method, since it will only be
used from within the class?
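
On the static-vs-private question above: a general Java consideration (not necessarily the author's reason in this PR) is that a static helper can be called during constructor delegation, before `this` is fully initialized, and the compiler guarantees it cannot touch instance state. A minimal sketch with hypothetical names:

```java
// Minimal illustration (hypothetical names, not the actual RMQSource code):
// a static helper can be used inside constructor delegation, where calling
// an instance method on the half-constructed object would not compile.
public class Source {
    private final String schema;

    public Source(String schema) {
        this.schema = normalize(schema);
    }

    public Source() {
        // Legal only because normalize is static: instance members cannot be
        // referenced in the arguments of an explicit this()/super() call.
        this(normalize(null));
    }

    // Static: cannot read or write instance fields, so it is safe to call
    // at any point during construction.
    private static String normalize(String s) {
        return (s == null || s.isEmpty()) ? "default" : s.trim();
    }

    public String schema() {
        return schema;
    }
}
```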





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] senegalo commented on a change in pull request #12056: [FLINK-17502] [flink-connector-rabbitmq] RMQSource refactor

2020-06-28 Thread GitBox


senegalo commented on a change in pull request #12056:
URL: https://github.com/apache/flink/pull/12056#discussion_r446633193



##
File path: 
flink-connectors/flink-connector-rabbitmq/src/main/java/org/apache/flink/streaming/connectors/rabbitmq/RMQSource.java
##
@@ -77,6 +78,7 @@
private final RMQConnectionConfig rmqConnectionConfig;
protected final String queueName;
private final boolean usesCorrelationId;
+   protected RMQDeserializationSchema deliveryDeserializer;
protected DeserializationSchema schema;

Review comment:
   Elegant solution !! 





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12772: [FLINK-18394][docs-zh] Translate "Parquet Format" page into Chinese

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12772:
URL: https://github.com/apache/flink/pull/12772#issuecomment-650132836


   
   ## CI report:
   
   * 8a7f44f2328cd6d17eb6f6a5e2b2f9da4db12e9e Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4078)
 
   * dc5fc0fbcc42a47c3d375d40a2c230829cb9 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4079)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12781: [FLINK-18376][Table SQL/Runtime]Fix java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12781:
URL: https://github.com/apache/flink/pull/12781#issuecomment-650728077


   
   ## CI report:
   
   * 403cb8c2d677a1eb55e6315f2f79cfd002b30e6e Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4087)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12780: [FLINK-18441][Table SQL/Runtime] Support Local-Global Aggregation with LastValue Function

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12780:
URL: https://github.com/apache/flink/pull/12780#issuecomment-650724872


   
   ## CI report:
   
   * f4d32052e1931063bc99f36574ab0f29fe448825 UNKNOWN
   * 4c93837182aa9b5f239ad8fe272db4b684aac2e4 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4083)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Updated] (FLINK-17465) Update Chinese user documentation for job manager memory model

2020-06-28 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-17465:
-
Affects Version/s: 1.11.0

> Update Chinese user documentation for job manager memory model
> --
>
> Key: FLINK-17465
> URL: https://issues.apache.org/jira/browse/FLINK-17465
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Affects Versions: 1.11.0
>Reporter: Andrey Zagrebin
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> This is a follow-up for FLINK-16946.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-17807) Fix the broken link "/zh/ops/memory/mem_detail.html" in documentation

2020-06-28 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song updated FLINK-17807:
-
Affects Version/s: 1.11.0

> Fix the broken link "/zh/ops/memory/mem_detail.html" in documentation
> -
>
> Key: FLINK-17807
> URL: https://issues.apache.org/jira/browse/FLINK-17807
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Affects Versions: 1.11.0
>Reporter: Jark Wu
>Assignee: Xintong Song
>Priority: Major
> Fix For: 1.11.0
>
>
> We may need to update {{mem_setup_tm.zh.md}} and {{mem_trouble.zh.md}} to 
> resolve the remaining broken link:
> http://localhost:4000/zh/ops/memory/mem_detail.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Closed] (FLINK-17807) Fix the broken link "/zh/ops/memory/mem_detail.html" in documentation

2020-06-28 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17807?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-17807.

Fix Version/s: 1.11.0
   Resolution: Fixed

This has been fixed by FLINK-17465.

> Fix the broken link "/zh/ops/memory/mem_detail.html" in documentation
> -
>
> Key: FLINK-17807
> URL: https://issues.apache.org/jira/browse/FLINK-17807
> Project: Flink
>  Issue Type: Bug
>  Components: Documentation
>Reporter: Jark Wu
>Assignee: Xintong Song
>Priority: Major
> Fix For: 1.11.0
>
>
> We may need to update {{mem_setup_tm.zh.md}} and {{mem_trouble.zh.md}} to 
> resolve the remaining broken link:
> http://localhost:4000/zh/ops/memory/mem_detail.html



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #12779: [FLINK-17920][python][docs] Add the Python example of Time-windowed j…

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12779:
URL: https://github.com/apache/flink/pull/12779#issuecomment-650724847


   
   ## CI report:
   
   * af7a9716eeb50ba8d9dcf5ba38a79d6f053bb3cf Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4082)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #12781: [FLINK-18376][Table SQL/Runtime]Fix java.lang.ArrayIndexOutOfBoundsException in RetractableTopNFunction

2020-06-28 Thread GitBox


flinkbot commented on pull request #12781:
URL: https://github.com/apache/flink/pull/12781#issuecomment-650728077


   
   ## CI report:
   
   * 403cb8c2d677a1eb55e6315f2f79cfd002b30e6e UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #12780: [FLINK-18441][Table SQL/Runtime] Support Local-Global Aggregation with LastValue Function

2020-06-28 Thread GitBox


flinkbot edited a comment on pull request #12780:
URL: https://github.com/apache/flink/pull/12780#issuecomment-650724872


   
   ## CI report:
   
   * f4d32052e1931063bc99f36574ab0f29fe448825 UNKNOWN
   * 4c93837182aa9b5f239ad8fe272db4b684aac2e4 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Closed] (FLINK-17465) Update Chinese user documentation for job manager memory model

2020-06-28 Thread Xintong Song (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-17465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xintong Song closed FLINK-17465.

Resolution: Done

Done via:
* master: 02e5977977ed614debd0d22c1373075c1f8d2a08
* release-1.11: a1f4c3309c0bcd6d863ba570f8c88347294e4053

> Update Chinese user documentation for job manager memory model
> --
>
> Key: FLINK-17465
> URL: https://issues.apache.org/jira/browse/FLINK-17465
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation
>Reporter: Andrey Zagrebin
>Assignee: Xintong Song
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.11.0
>
>
> This is a follow-up for FLINK-16946.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] RocMarshal edited a comment on pull request #12778: [FLINK-18422][docs] Update Prefer tag in documentation 'Fault Tolerance training lesson'

2020-06-28 Thread GitBox


RocMarshal edited a comment on pull request #12778:
URL: https://github.com/apache/flink/pull/12778#issuecomment-650690595


   @klion26 
   Could you help me review this PR
   (https://github.com/apache/flink/pull/12778)?
   Thank you.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-18324) Translate updated data type and function page into Chinese

2020-06-28 Thread Benchao Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17147289#comment-17147289
 ] 

Benchao Li commented on FLINK-18324:


[~liyubin117] Could you also open a PR against the release-1.11 branch?

> Translate updated data type and function page into Chinese
> --
>
> Key: FLINK-18324
> URL: https://issues.apache.org/jira/browse/FLINK-18324
> Project: Flink
>  Issue Type: Sub-task
>  Components: chinese-translation, Documentation, Table SQL / API
>Reporter: Timo Walther
>Assignee: Yubin Li
>Priority: Major
>  Labels: pull-request-available
>
> The Chinese translations of the pages updated in FLINK-18248 and FLINK-18065 
> need an update.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

