[GitHub] [flink] chenqin commented on pull request #15442: [FLINK-22081][flink-core] handle entropy injection metadata path in pluggable HadoopS3FileSystem

2021-04-02 Thread GitBox


chenqin commented on pull request #15442:
URL: https://github.com/apache/flink/pull/15442#issuecomment-812803257


   @AHeise can you take a look again?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15484: [FLINK-21999][Runtime/Coordination] uniformize the logic about whether checkpoint is enabled and fix some typo.

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15484:
URL: https://github.com/apache/flink/pull/15484#issuecomment-812771848


   
   ## CI report:
   
   * 35826f519cf9b91ca759b4e13dcc7af811e8d0a8 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16033)
 
   
   
   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   






[jira] [Commented] (FLINK-11838) Create RecoverableWriter for GCS

2021-04-02 Thread Xintong Song (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17314161#comment-17314161
 ] 

Xintong Song commented on FLINK-11838:
--

Thanks for the update, [~galenwarren].

Either updating the old PR or starting a new one works for me. 

> Create RecoverableWriter for GCS
> 
>
> Key: FLINK-11838
> URL: https://issues.apache.org/jira/browse/FLINK-11838
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Galen Warren
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> GCS supports the resumable upload which we can use to create a Recoverable 
> writer similar to the S3 implementation:
> https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
> After using the Hadoop compatible interface: 
> https://github.com/apache/flink/pull/7519
> We've noticed that the current implementation relies heavily on the renaming 
> of the files on the commit: 
> https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L233-L259
> This is suboptimal on an object store such as GCS. Therefore we would like to 
> implement a more GCS native RecoverableWriter 
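As a rough illustration of why a resumable-upload-based writer avoids rename-on-commit, here is a toy in-memory model (purely hypothetical; not Flink code and not the GCS API — all names are made up): persist() captures a resume token, recover() truncates the staged bytes back to that token, and commit() publishes the object in place, with no temporary-file rename.

```java
import java.util.HashMap;
import java.util.Map;

// Toy in-memory model of a recoverable writer on top of a resumable upload.
// The resume token is simply the persisted length: recover() drops any bytes
// written after the last persist, and commit() publishes in place.
class ResumableStore {
    final Map<String, StringBuilder> staged = new HashMap<>();
    final Map<String, String> objects = new HashMap<>();

    String startUpload() {
        String id = "upload-" + staged.size();
        staged.put(id, new StringBuilder());
        return id;
    }

    void append(String id, String data) { staged.get(id).append(data); }

    // Checkpoint: capture how many bytes are durably staged.
    long persist(String id) { return staged.get(id).length(); }

    // Recovery: truncate back to the last persisted length.
    void recover(String id, long token) { staged.get(id).setLength((int) token); }

    // Commit finalizes the upload under its final name -- no rename step.
    void commit(String id, String name) { objects.put(name, staged.remove(id).toString()); }
}
```

The rename-on-commit approach in the Hadoop-based writer instead stages a full temp file and renames it, which is a copy on object stores such as GCS.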



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15484: [FLINK-21999][Runtime/Coordination] uniformize the logic about whether checkpoint is enabled and fix some typo.

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15484:
URL: https://github.com/apache/flink/pull/15484#issuecomment-812771848


   
   ## CI report:
   
   * 35826f519cf9b91ca759b4e13dcc7af811e8d0a8 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16033)
 
   
   






[GitHub] [flink] flinkbot commented on pull request #15484: [FLINK-21999][Runtime/Coordination] uniformize the logic about whether checkpoint is enabled and fix some typo.

2021-04-02 Thread GitBox


flinkbot commented on pull request #15484:
URL: https://github.com/apache/flink/pull/15484#issuecomment-812771848


   
   ## CI report:
   
   * 35826f519cf9b91ca759b4e13dcc7af811e8d0a8 UNKNOWN
   
   






[GitHub] [flink] flinkbot commented on pull request #15484: [FLINK-21999][Runtime/Coordination] uniformize the logic about whether checkpoint is enabled and fix some typo.

2021-04-02 Thread GitBox


flinkbot commented on pull request #15484:
URL: https://github.com/apache/flink/pull/15484#issuecomment-812765923


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 35826f519cf9b91ca759b4e13dcc7af811e8d0a8 (Sat Apr 03 
00:37:06 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required.

   ## Bot commands
   The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   






[jira] [Updated] (FLINK-21999) The logic about whether Checkpoint is enabled.

2021-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-21999:
---
Labels: pull-request-available  (was: )

> The logic about whether Checkpoint is enabled.
> --
>
> Key: FLINK-21999
> URL: https://issues.apache.org/jira/browse/FLINK-21999
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination
>Reporter: ZhangWei
>Assignee: ZhangWei
>Priority: Major
>  Labels: pull-request-available
>
> org.apache.flink.runtime.executiongraph.DefaultExecutionGraphBuilder#isCheckpointingEnabled
>  assumes checkpointing is enabled whenever JobCheckpointingSettings is not 
> null. However, this is not enough: we must also guarantee that the checkpoint 
> interval lies in [MINIMAL_CHECKPOINT_TIME, Long.MaxValue), which is what 
> JobGraph#isCheckpointingEnabled does.
> In the current implementation, when the checkpoint interval is not set and is 
> left at its default value of -1, the interval is changed to Long.MaxValue. 
> Thus DefaultExecutionGraphBuilder#isCheckpointingEnabled returns true, which 
> is incorrect.
> In addition, different classes assume checkpointing is enabled for different 
> interval ranges:
> 1. CheckpointConfig -> (0, Long.MaxValue]
> 2. JobGraph -> (0, Long.MaxValue)
> This is inconsistent. The correct range is [MINIMAL_CHECKPOINT_TIME, 
> Long.MaxValue).
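The uniform check the issue asks for can be sketched as follows (a minimal illustration, not Flink's actual code: the class name is made up, and the 10 ms value for MINIMAL_CHECKPOINT_TIME is an assumption mirroring Flink's checkpoint configuration):

```java
// Minimal sketch of a single shared predicate for "is checkpointing enabled".
// The valid range is [MINIMAL_CHECKPOINT_TIME, Long.MAX_VALUE).
class CheckpointingSketch {
    static final long MINIMAL_CHECKPOINT_TIME = 10; // assumed 10 ms minimum

    static boolean isCheckpointingEnabled(long intervalMs) {
        // The default interval of -1 is normalized to Long.MAX_VALUE when
        // checkpointing is disabled, so the upper bound must be exclusive.
        return intervalMs >= MINIMAL_CHECKPOINT_TIME && intervalMs < Long.MAX_VALUE;
    }
}
```

With one predicate like this shared by DefaultExecutionGraphBuilder, CheckpointConfig and JobGraph, the inconsistency between their interval ranges disappears.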





[GitHub] [flink] est08zw opened a new pull request #15484: [FLINK-21999][Runtime/Coordination] uniformize the logic about whether checkpoint is enabled and fix some typo.

2021-04-02 Thread GitBox


est08zw opened a new pull request #15484:
URL: https://github.com/apache/flink/pull/15484


   
   
   ## What is the purpose of the change
   
   Make the logic about whether Checkpoint is enabled consistent in 
DefaultExecutionGraphBuilder, CheckpointConfig and JobGraph. Fix some typos in 
these files.
   
   ## Brief change log
   
   Make the logic about whether Checkpoint is enabled consistent in 
DefaultExecutionGraphBuilder, CheckpointConfig and JobGraph. Fix some typos in 
these files.
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? no
   






[GitHub] [flink] est08zw commented on pull request #15399: [hotfix] add coordinatorState into hashCode() to comply with equals().

2021-04-02 Thread GitBox


est08zw commented on pull request #15399:
URL: https://github.com/apache/flink/pull/15399#issuecomment-812728024


   It is OK to leave it as before if the method is mainly used for tests; feel 
free to close this PR then.






[jira] [Comment Edited] (FLINK-11838) Create RecoverableWriter for GCS

2021-04-02 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17314079#comment-17314079
 ] 

Galen Warren edited comment on FLINK-11838 at 4/2/21, 8:54 PM:
---

[~xintongsong] – actually, is that the right thing to do, start a new PR? I'm 
thinking that might be the cleanest way to proceed, since we never really used 
the old one that I created prematurely, and the info in the old one is out of 
date, but I'll follow your guidance here on what to do.

Old PR is [here|https://github.com/apache/flink/pull/14875]


was (Author: galenwarren):
[~xintongsong] – actually, is that the right thing to do, start a new PR? I'm 
thinking that might be the cleanest way to proceed, since we never really used 
the old one that I created prematurely, and the info in the old one is out of 
date, but I'll follow your guidance here on what to do.

> Create RecoverableWriter for GCS
> 
>
> Key: FLINK-11838
> URL: https://issues.apache.org/jira/browse/FLINK-11838
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Galen Warren
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> GCS supports the resumable upload which we can use to create a Recoverable 
> writer similar to the S3 implementation:
> https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
> After using the Hadoop compatible interface: 
> https://github.com/apache/flink/pull/7519
> We've noticed that the current implementation relies heavily on the renaming 
> of the files on the commit: 
> https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L233-L259
> This is suboptimal on an object store such as GCS. Therefore we would like to 
> implement a more GCS native RecoverableWriter 





[jira] [Commented] (FLINK-11838) Create RecoverableWriter for GCS

2021-04-02 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17314079#comment-17314079
 ] 

Galen Warren commented on FLINK-11838:
--

[~xintongsong] – actually, is that the right thing to do, start a new PR? I'm 
thinking that might be the cleanest way to proceed, since we never really used 
the old one that I created prematurely, and the info in the old one is out of 
date, but I'll follow your guidance here on what to do.

> Create RecoverableWriter for GCS
> 
>
> Key: FLINK-11838
> URL: https://issues.apache.org/jira/browse/FLINK-11838
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Galen Warren
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> GCS supports the resumable upload which we can use to create a Recoverable 
> writer similar to the S3 implementation:
> https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
> After using the Hadoop compatible interface: 
> https://github.com/apache/flink/pull/7519
> We've noticed that the current implementation relies heavily on the renaming 
> of the files on the commit: 
> https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L233-L259
> This is suboptimal on an object store such as GCS. Therefore we would like to 
> implement a more GCS native RecoverableWriter 





[jira] [Commented] (FLINK-18071) CoordinatorEventsExactlyOnceITCase.checkListContainsSequence fails on CI

2021-04-02 Thread Kezhu Wang (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-18071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17314074#comment-17314074
 ] 

Kezhu Wang commented on FLINK-18071:


Hi all, I've dug into this and thought about it for some time, and I want to 
share what I found. I might be wrong, though; please forgive me if so.

h4. Symptom cause

As [~gaoyunhaii] and [~sewen] pointed out before, there are lingering sends 
after {{resetToCheckpoint}}. Adding the following snippet to {{sendNextEvent}} 
before the event is sent makes this test fail often.
{code:java}
// This makes 749620d007e93a6fba6a7d9cb759ec68c7670b00 fail quite often.
Thread.sleep(50);

// The following makes it fail even more often in 1.13 (without it, failures
// are not as frequent as with the commit above).
if (periodicTask == null) {
    Thread.sleep(5000);
}
{code}

{{CoordinatorEventsExactlyOnceITCase}} tests only global failover, not region 
failover. So a simple wrapper with 
{{RecreateOnResetOperatorCoordinator.Provider}} (plus some minor changes) will 
pass this test.

h4. Root cause

Initially, I thought we might be able to do something in the runtime to guard 
against this. But after tackling the code a bit, I realized that it will be 
hard to guard region failover to achieve exactly-once with the current API:

# Currently, events are sent through {{Context.sendEvent(OperatorEvent evt, 
int targetSubtask)}}.
# Without a strict promise from the implementation of {{OperatorCoordinator}}, 
the possibility of sending an event from an old incarnation after failure or 
reset is not zero.

To achieve an exactly-once guarantee, to my knowledge we either have to enforce 
a strict promise on implementations of {{OperatorCoordinator}} or change the 
API a bit. I don't think strict enforcement on implementations is a good 
choice: it is just too fragile when there are 100 different implementations, 
and it is hard to figure out where bad things happen.

For API changes, I drafted the following:
* Drop {{Context.sendEvent(OperatorEvent evt, int targetSubtask)}}.
* Add {{void OperatorCoordinator#subtaskReady(int subtask, SubtaskContext 
context)}}. This will be called just before the first event from the operator.
* The main method of {{SubtaskContext}} is {{CompletableFuture 
sendEvent(OperatorEvent evt) throws TaskNotRunningException}}. This method 
binds to a single execution attempt, which guarantees that {{sendEvent}} 
cannot mix up multiple runs of a task. {{SubtaskContext}} could also extend 
{{Context}}.
* Explicit restriction: the operator coordinator will not be able to send an 
event to an operator instance before it is ready. I don't see any reason to 
send an event from the coordinator first.
* Optimization: quiesce the {{SubtaskContext}} on both global and region 
failure, and fail sends after it is quiesced.

Squashed into one sentence: add subtask readiness to the operator coordinator 
and bind sending to a single execution attempt after readiness.
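The proposal above could be sketched roughly as below. These are hypothetical interfaces following the commenter's draft, not existing Flink classes; the quiescing stub only illustrates the "quiesce on failure, fail sends afterwards" point.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of the proposed API: sendEvent is bound to one
// execution attempt through a per-subtask context handed over in
// subtaskReady(), so an old incarnation cannot deliver events.
interface OperatorEvent {}

interface SubtaskContext {
    // Bound to a single execution attempt; completes exceptionally once
    // the attempt is gone.
    CompletableFuture<Void> sendEvent(OperatorEvent evt);
}

interface OperatorCoordinatorSketch {
    // Called just before the first event from the operator; the coordinator
    // must not send to a subtask before this.
    void subtaskReady(int subtask, SubtaskContext context);
}

// Illustration of the "quiesce on global/region failure" optimization.
class QuiescableContext implements SubtaskContext {
    private volatile boolean quiesced = false;

    // Called by the runtime when the bound execution attempt fails or resets.
    void quiesce() { quiesced = true; }

    @Override
    public CompletableFuture<Void> sendEvent(OperatorEvent evt) {
        CompletableFuture<Void> result = new CompletableFuture<>();
        if (quiesced) {
            result.completeExceptionally(
                    new IllegalStateException("execution attempt is gone"));
        } else {
            result.complete(null); // in reality: hand off to the network stack
        }
        return result;
    }
}
```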

h4. Other thoughts
Currently, to create an operator coordinator on {{resetToCheckpoint}}, one has 
to extend {{RecreateOnResetOperatorCoordinator.Provider}}. That is a bit 
verbose and less explicit in the API. I suggest adding tag interfaces so the 
runtime can wrap providers from the client side automatically; these tag 
interfaces would then become part of the coordinator API. Personally, I think 
recreating the coordinator on reset is simpler to use, especially for a 
complex coordinator, so it might be worth promoting that in the API.

[~sewen] [~gaoyunhaii] [~becket_qin] What do you think? There might be other 
approaches to solve this; glad to hear them.

> CoordinatorEventsExactlyOnceITCase.checkListContainsSequence fails on CI
> 
>
> Key: FLINK-18071
> URL: https://issues.apache.org/jira/browse/FLINK-18071
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.11.0
>Reporter: Robert Metzger
>Priority: Critical
>  Labels: test-stability
> Fix For: 1.11.4, 1.13.0
>
>
> CI: 
> https://dev.azure.com/georgeryan1322/Flink/_build/results?buildId=330=logs=6e58d712-c5cc-52fb-0895-6ff7bd56c46b=f30a8e80-b2cf-535c-9952-7f521a4ae374
> {code}
> [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 8.795 
> s <<< FAILURE! - in 
> org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase
> [ERROR] 
> test(org.apache.flink.runtime.operators.coordination.CoordinatorEventsExactlyOnceITCase)
>   Time elapsed: 4.647 s  <<< FAILURE!
> java.lang.AssertionError: List did not contain expected sequence of 200 
> elements, but was: [152, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 
> 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 
> 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 
> 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 

[jira] [Created] (FLINK-22108) Ephemeral socket address was checkpointed to state and restored back in CollectSinkOperatorCoordinator

2021-04-02 Thread Kezhu Wang (Jira)
Kezhu Wang created FLINK-22108:
--

 Summary: Ephemeral socket address was checkpointed to state and 
restored back in CollectSinkOperatorCoordinator
 Key: FLINK-22108
 URL: https://issues.apache.org/jira/browse/FLINK-22108
 Project: Flink
  Issue Type: Bug
  Components: API / DataStream
Affects Versions: 1.13.0
Reporter: Kezhu Wang


{{CollectSinkOperatorCoordinator}} checkpoints its {{address}} field to state 
and restores it back. That field is the listener address of 
{{CollectSinkFunction}}. After {{resetToCheckpoint}} (e.g. a global failover), 
{{address}} is meaningless. If a client request arrives before the 
{{CollectSinkAddressEvent}}, the coordinator will use that meaningless address 
for the connection. In the best case an error happens and nothing is hurt; in 
the worst case, no one knows where the restored address now points.
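One possible fix direction, sketched as a toy example (illustrative only, not the actual Flink class; the field and method names are made up): keep the ephemeral address out of the serialized state, e.g. by marking it transient, so a restore cannot resurrect a stale endpoint.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Toy sketch: durable state survives a checkpoint round-trip, while the
// ephemeral listener address is transient and comes back as null, forcing
// the coordinator to wait for a fresh address event instead of reusing a
// stale one.
class CoordinatorState implements Serializable {
    long checkpointId;                 // durable: should survive a restore
    transient String listenerAddress;  // ephemeral: null after restore

    CoordinatorState(long checkpointId, String listenerAddress) {
        this.checkpointId = checkpointId;
        this.listenerAddress = listenerAddress;
    }

    // Serialize and deserialize, simulating checkpoint + restore.
    static CoordinatorState roundTrip(CoordinatorState state) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(state);
        oos.flush();
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        return (CoordinatorState) ois.readObject();
    }
}
```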

cc  [~TsReaper] [~ykt836]





[jira] [Commented] (FLINK-11838) Create RecoverableWriter for GCS

2021-04-02 Thread Galen Warren (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17314067#comment-17314067
 ] 

Galen Warren commented on FLINK-11838:
--

Hi [~xintongsong] – sorry for the long delay. I plan to create a new PR and 
upload some code this weekend, my other work has let up a bit and I have some 
time to look at this. I'll post the link here when it's ready.

> Create RecoverableWriter for GCS
> 
>
> Key: FLINK-11838
> URL: https://issues.apache.org/jira/browse/FLINK-11838
> Project: Flink
>  Issue Type: New Feature
>  Components: Connectors / FileSystem
>Affects Versions: 1.8.0
>Reporter: Fokko Driesprong
>Assignee: Galen Warren
>Priority: Major
>  Labels: pull-request-available, usability
> Fix For: 1.13.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> GCS supports the resumable upload which we can use to create a Recoverable 
> writer similar to the S3 implementation:
> https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload
> After using the Hadoop compatible interface: 
> https://github.com/apache/flink/pull/7519
> We've noticed that the current implementation relies heavily on the renaming 
> of the files on the commit: 
> https://github.com/apache/flink/blob/master/flink-filesystems/flink-hadoop-fs/src/main/java/org/apache/flink/runtime/fs/hdfs/HadoopRecoverableFsDataOutputStream.java#L233-L259
> This is suboptimal on an object store such as GCS. Therefore we would like to 
> implement a more GCS native RecoverableWriter 





[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   * 4ea2496e67e1b5da116eac2058d51296e5b14edc Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16029)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   * b5e4eb916a93c6c81cd2cc6a133838e26ac062f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16027)
 
   * 4ea2496e67e1b5da116eac2058d51296e5b14edc UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15483: [FLINK-22092][hive] Ignore static conf file URLs in HiveConf

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15483:
URL: https://github.com/apache/flink/pull/15483#issuecomment-812520117


   
   ## CI report:
   
   * 4aba800848a69d11dbbddf7afe2b3a87e95f8f87 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16026)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15441: [FLINK-22052][python] Add FLIP-142 public classes to python API

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15441:
URL: https://github.com/apache/flink/pull/15441#issuecomment-810695518


   
   ## CI report:
   
   * a4daf4b2b785878d576c7f759c5126d6e195a0c3 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16028)
 
   
   






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812640029


   @fsk119 Added `FilterableSourceTest` and `FilterableSourceITCase`, waiting 
for your review.






[GitHub] [flink] flinkbot edited a comment on pull request #15466: [FLINK-22003][checkpointing] Prevent checkpoint from starting if any Source isn't running

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15466:
URL: https://github.com/apache/flink/pull/15466#issuecomment-811978896


   
   ## CI report:
   
   * ad1b080091b48a82188261ef9fab9c4229af4c9a Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16024)
 
   
   






[jira] [Commented] (FLINK-17957) Forbidden syntax "CREATE SYSTEM FUNCTION" for sql parser

2021-04-02 Thread WeiNan Zhao (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-17957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313938#comment-17313938
 ] 

WeiNan Zhao commented on FLINK-17957:
-

[~fsk119], please assign this issue to me; I will fix it.

> Forbidden syntax "CREATE SYSTEM FUNCTION" for sql parser
> 
>
> Key: FLINK-17957
> URL: https://issues.apache.org/jira/browse/FLINK-17957
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / API
>Affects Versions: 1.11.0
>Reporter: Danny Chen
>Assignee: Shengkai Fang
>Priority: Major
> Fix For: 1.13.0
>
>
> This syntax is invalid, but the parser still works.





[jira] [Assigned] (FLINK-22003) UnalignedCheckpointITCase fail

2021-04-02 Thread Roman Khachatryan (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Roman Khachatryan reassigned FLINK-22003:
-

Assignee: Roman Khachatryan

> UnalignedCheckpointITCase fail
> --
>
> Key: FLINK-22003
> URL: https://issues.apache.org/jira/browse/FLINK-22003
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Assignee: Roman Khachatryan
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15601=logs=119bbba7-f5e3-5e08-e72d-09f1529665de=7dc1f5a9-54e1-502e-8b02-c7df69073cfc=4142
> {code:java}
> [ERROR] execute[parallel pipeline with remote channels, p = 
> 5](org.apache.flink.test.checkpointing.UnalignedCheckpointITCase)  Time 
> elapsed: 60.018 s  <<< ERROR!
> org.junit.runners.model.TestTimedOutException: test timed out after 6 
> milliseconds
>   at sun.misc.Unsafe.park(Native Method)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.CompletableFuture$Signaller.block(CompletableFuture.java:1707)
>   at 
> java.util.concurrent.ForkJoinPool.managedBlock(ForkJoinPool.java:3323)
>   at 
> java.util.concurrent.CompletableFuture.waitingGet(CompletableFuture.java:1742)
>   at 
> java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1859)
>   at 
> org.apache.flink.streaming.api.environment.LocalStreamEnvironment.execute(LocalStreamEnvironment.java:69)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1839)
>   at 
> org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.execute(StreamExecutionEnvironment.java:1822)
>   at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:138)
>   at 
> org.apache.flink.test.checkpointing.UnalignedCheckpointITCase.execute(UnalignedCheckpointITCase.java:184)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] zzfukai edited a comment on pull request #15054: [FLINK-13550][rest][ui] Vertex Flame Graph

2021-04-02 Thread GitBox


zzfukai edited a comment on pull request #15054:
URL: https://github.com/apache/flink/pull/15054#issuecomment-812486880


   Thank you Arvid! It works.
   By the way, I noticed a bug: popover content like 
`org.apache.flink.table.runtime.operators.deduplicate.ProcTimeDeduplicateKeepLastRowFunction.processElement:80
 (21.245%, 232 samples)` does not disappear when the graph is refreshed; the 
popovers accumulate on top of the figure.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15482: [FLINK-22103][hive] Fix HiveModuleTest for 1.2.1

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15482:
URL: https://github.com/apache/flink/pull/15482#issuecomment-812447703


   
   ## CI report:
   
   * 2d9cd4f63f2e806f0a9a0466e4e6b6cbba02bd89 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16021)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15441: [FLINK-22052][python] Add FLIP-142 public classes to python API

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15441:
URL: https://github.com/apache/flink/pull/15441#issuecomment-810695518


   
   ## CI report:
   
   * 5f75c7d27817bd8e0822708aacab720ed1e2e394 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15976)
 
   * a4daf4b2b785878d576c7f759c5126d6e195a0c3 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16028)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   * b5e4eb916a93c6c81cd2cc6a133838e26ac062f9 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16027)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21247) flink iceberg table map cannot convert to datastream

2021-04-02 Thread Jark Wu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313879#comment-17313879
 ] 

Jark Wu commented on FLINK-21247:
-

Thanks [~openinx], I changed the type to BUG, and fix version to 1.13. 

> flink iceberg table map cannot convert to datastream
> ---
>
> Key: FLINK-21247
> URL: https://issues.apache.org/jira/browse/FLINK-21247
> Project: Flink
>  Issue Type: Bug
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.12.1
> Environment: iceberg master
> flink 1.12
>  
>  
>Reporter: donglei
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
> Attachments: image-2021-02-03-15-38-42-340.png, 
> image-2021-02-03-15-40-27-055.png, image-2021-02-03-15-41-34-426.png, 
> image-2021-02-03-15-43-19-919.png, image-2021-02-03-15-52-12-493.png, 
> image-2021-02-03-15-53-18-244.png
>
>
> Flink Iceberg Table with map
> !image-2021-02-03-15-38-42-340.png!
>  
> We want to read the table like this:
>  
> String querySql = "SELECT 
> ftime,extinfo,country,province,operator,apn,gw,src_ip_head,info_str,product_id,app_version,sdk_id,sdk_version,hardware_os,qua,upload_ip,client_ip,upload_apn,event_code,event_result,package_size,consume_time,event_value,event_time,upload_time,boundle_id,uin,platform,os_version,channel,brand,model
>  from bfzt3 ";
> Table table = tEnv.sqlQuery(querySql);
> DataStream sinkStream = tEnv.toAppendStream(table, 
> Types.POJO(AttaInfo.class, map));
> sinkStream.map(x->1).returns(Types.INT).keyBy(new 
> NullByteKeySelector()).reduce((x,y) -> {
>  return x+y;
> }).print();
>  
>  
> When reading, we get the following exception:
>  
> 2021-02-03 15:37:57
> java.lang.ClassCastException: 
> org.apache.iceberg.flink.data.FlinkParquetReaders$ReusableMapData cannot be 
> cast to org.apache.flink.table.data.binary.BinaryMapData
> at 
> org.apache.flink.table.runtime.typeutils.MapDataSerializer.copy(MapDataSerializer.java:107)
> at 
> org.apache.flink.table.runtime.typeutils.MapDataSerializer.copy(MapDataSerializer.java:47)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:166)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:129)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:69)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:317)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:411)
> at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:241)
>  
> We find that the Iceberg map is ReusableMapData, which implements MapData 
> !image-2021-02-03-15-40-27-055.png!
>  
> this is the exception 
> !image-2021-02-03-15-41-34-426.png!
> MapData has two default implementations, GenericMapData and BinaryMapData,
> but the Iceberg implementation is ReusableMapData.
>  
> So I think the code should be changed like this: 
> !image-2021-02-03-15-43-19-919.png!
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
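The ClassCastException above comes from copy logic that unconditionally casts every MapData to BinaryMapData. The safer approach the reporter suggests — dispatching on the runtime implementation — can be sketched with a small self-contained example. Note: the MapData/BinaryMapData/GenericMapData types below are hypothetical stand-ins for illustration, not Flink's actual classes, and this is not the actual Flink fix.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for Flink's MapData interface.
interface MapData {
    Map<String, Integer> toJavaMap();
}

// Binary (serialized) representation, analogous to BinaryMapData.
class BinaryMapData implements MapData {
    private final Map<String, Integer> backing;
    BinaryMapData(Map<String, Integer> backing) { this.backing = new HashMap<>(backing); }
    public Map<String, Integer> toJavaMap() { return new HashMap<>(backing); }
    BinaryMapData copy() { return new BinaryMapData(backing); }
}

// Generic (object) representation, analogous to GenericMapData or
// Iceberg's ReusableMapData.
class GenericMapData implements MapData {
    private final Map<String, Integer> backing;
    GenericMapData(Map<String, Integer> backing) { this.backing = new HashMap<>(backing); }
    public Map<String, Integer> toJavaMap() { return new HashMap<>(backing); }
}

public class MapDataCopySketch {
    // Copy that dispatches on the runtime type instead of casting
    // unconditionally to BinaryMapData.
    static MapData copy(MapData from) {
        if (from instanceof BinaryMapData) {
            return ((BinaryMapData) from).copy();
        }
        // Fall back to a generic deep copy for any other implementation,
        // so third-party MapData types do not trigger a ClassCastException.
        return new GenericMapData(from.toJavaMap());
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new HashMap<>();
        m.put("a", 1);
        MapData copied = copy(new GenericMapData(m)); // no ClassCastException
        System.out.println(copied.toJavaMap()); // prints {a=1}
    }
}
```

The point of the `instanceof` check is that the copy path stays correct for any MapData implementation, not just the two shipped with Flink.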


[jira] [Updated] (FLINK-21247) flink iceberg table map cannot convert to datastream

2021-04-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-21247:

Issue Type: Bug  (was: New Feature)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-21247) flink iceberg table map cannot convert to datastream

2021-04-02 Thread Jark Wu (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jark Wu updated FLINK-21247:

Fix Version/s: 1.13.0




--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15483: [FLINK-22092][hive] Ignore static conf file URLs in HiveConf

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15483:
URL: https://github.com/apache/flink/pull/15483#issuecomment-812520117


   
   ## CI report:
   
   * 4aba800848a69d11dbbddf7afe2b3a87e95f8f87 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16026)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15441: [FLINK-22052][python] Add FLIP-142 public classes to python API

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15441:
URL: https://github.com/apache/flink/pull/15441#issuecomment-810695518


   
   ## CI report:
   
   * 5f75c7d27817bd8e0822708aacab720ed1e2e394 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15976)
 
   * a4daf4b2b785878d576c7f759c5126d6e195a0c3 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15425: [FLINK-21133][connector/checkpoint] Fix the stop-with-savepoint case …

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15425:
URL: https://github.com/apache/flink/pull/15425#issuecomment-809890369


   
   ## CI report:
   
   * 1b248e49e4695b3ac7d21320a377fb74ff460687 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16020)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16003)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15483: [FLINK-22092][hive] Ignore static conf file URLs in HiveConf

2021-04-02 Thread GitBox


flinkbot commented on pull request #15483:
URL: https://github.com/apache/flink/pull/15483#issuecomment-812520117


   
   ## CI report:
   
   * 4aba800848a69d11dbbddf7afe2b3a87e95f8f87 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   * dcfc0b3d5083c0b2eaf95b0c5ea0e11baeeb4eed Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16025)
 
   * b5e4eb916a93c6c81cd2cc6a133838e26ac062f9 UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812514484


   Please take a look at #13449
   
   I think SourceWatermarkTest and SourceWatermarkITCase are good examples.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot commented on pull request #15483: [FLINK-22092][hive] Ignore static conf file URLs in HiveConf

2021-04-02 Thread GitBox


flinkbot commented on pull request #15483:
URL: https://github.com/apache/flink/pull/15483#issuecomment-812514496


   Thanks a lot for your contribution to the Apache Flink project. I'm the 
@flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress 
of the review.
   
   
   ## Automated Checks
   Last check on commit 4aba800848a69d11dbbddf7afe2b3a87e95f8f87 (Fri Apr 02 
12:45:39 UTC 2021)
   
   **Warnings:**
* No documentation files were touched! Remember to keep the Flink docs up 
to date!
   
   
   Mention the bot in a comment to re-run the automated checks.
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review 
Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full 
explanation of the review process.
The Bot is tracking the review progress through labels. Labels are applied 
according to the order of the review items. For consensus, approval by a Flink 
committer or PMC member is required. Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot approve description` to approve one or more aspects (aspects: 
`description`, `consensus`, `architecture` and `quality`)
- `@flinkbot approve all` to approve all aspects
- `@flinkbot approve-until architecture` to approve everything until 
`architecture`
- `@flinkbot attention @username1 [@username2 ..]` to require somebody's 
attention
- `@flinkbot disapprove architecture` to remove an approval you gave earlier
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Created] (FLINK-22107) Include antlr into hive connector uber jars

2021-04-02 Thread Rui Li (Jira)
Rui Li created FLINK-22107:
--

 Summary: Include antlr into hive connector uber jars
 Key: FLINK-22107
 URL: https://issues.apache.org/jira/browse/FLINK-22107
 Project: Flink
  Issue Type: Sub-task
  Components: Connectors / Hive
Reporter: Rui Li
 Fix For: 1.13.0






--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Updated] (FLINK-22092) Prevent HiveCatalog from reading hive-site in classpath

2021-04-02 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22092?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated FLINK-22092:
---
Labels: pull-request-available  (was: )

> Prevent HiveCatalog from reading hive-site in classpath
> ---
>
> Key: FLINK-22092
> URL: https://issues.apache.org/jira/browse/FLINK-22092
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
>  Labels: pull-request-available
> Fix For: 1.13.0
>
>
> It turns out that FLINK-19702 is incomplete and {{HiveConf}} may still 
> automatically load hive-site from classpath and set {{hiveSiteURL}} variable. 
> This can cause problems, e.g. create a HiveCatalog reading hive-site from 
> classpath, drop this catalog and also remove the hive-site file, create 
> another HiveCatalog with hive-conf-dir pointing to another location, the 2nd 
> HiveCatalog cannot be created because {{HiveConf}} has remembered the 
> hive-site location from the previous one and complains the file can no longer 
> be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] lirui-apache opened a new pull request #15483: [FLINK-22092][hive] Ignore static conf file URLs in HiveConf

2021-04-02 Thread GitBox


lirui-apache opened a new pull request #15483:
URL: https://github.com/apache/flink/pull/15483


   
   
   ## What is the purpose of the change
   
   `HiveConf` automatically detects hive-site from classpath and stores the URL 
into static variable `hiveSiteURL`. This can cause problems if a user creates 
multiple HiveCatalog instances.
   
   
   ## Brief change log
   
  - Prevent HiveConf from storing `hiveSiteURL`, and look for hive-site in the 
classpath ourselves when hive-conf-dir is not specified.
 - Add a test case
   
   
   ## Verifying this change
   
   Existing and added test cases
   
   ## Does this pull request potentially affect one of the following parts:
   
 - Dependencies (does it add or upgrade a dependency): no
 - The public API, i.e., is any changed class annotated with 
`@Public(Evolving)`: no
 - The serializers: no
 - The runtime per-record code paths (performance sensitive): no
 - Anything that affects deployment or recovery: JobManager (and its 
components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
 - The S3 file system connector: no
   
   ## Documentation
   
 - Does this pull request introduce a new feature? no
 - If yes, how is the feature documented? NA
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
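The pitfall this PR fixes — a static field that remembers the first hive-site URL ever discovered, so later catalog instances keep using a stale location — can be reproduced with a minimal self-contained sketch. The `ConfLoader`/`resolveSite` names below are hypothetical and only mimic the shape of HiveConf's static caching; this is not Hive's actual code.

```java
import java.util.Optional;

// Minimal sketch of the static-caching pitfall: the first discovered
// config location is remembered process-wide, so later instances keep
// using it even after it becomes invalid.
class ConfLoader {
    private static String cachedSiteUrl; // analogous to HiveConf's static hiveSiteURL

    // Buggy behavior: falls back to a process-wide cache of the first
    // classpath hit when no explicit conf dir is given.
    static String resolveSite(Optional<String> explicitDir, String classpathSite) {
        if (explicitDir.isPresent()) {
            return explicitDir.get() + "/hive-site.xml";
        }
        if (cachedSiteUrl == null) {
            cachedSiteUrl = classpathSite; // sticks forever
        }
        return cachedSiteUrl;
    }

    // The fix sketched in the PR: ignore the static cache and resolve
    // the current classpath location on every call.
    static String resolveSiteFresh(Optional<String> explicitDir, String classpathSite) {
        return explicitDir.map(d -> d + "/hive-site.xml").orElse(classpathSite);
    }
}

public class HiveSiteSketch {
    public static void main(String[] args) {
        // First catalog picks up hive-site from classpath location A.
        System.out.println(ConfLoader.resolveSite(Optional.empty(), "A/hive-site.xml"));
        // Second catalog: the classpath now points at B, but the stale cache wins.
        System.out.println(ConfLoader.resolveSite(Optional.empty(), "B/hive-site.xml"));
        // Fresh resolution returns the current location.
        System.out.println(ConfLoader.resolveSiteFresh(Optional.empty(), "B/hive-site.xml"));
    }
}
```

Running the sketch shows the second lookup still returning the A location, which mirrors the "second HiveCatalog cannot be created because the old hive-site is remembered" symptom from FLINK-22092.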




[GitHub] [flink] YuvalItzchakov edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812512544


   @fsk119 I'm looking at 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` and all I see there 
are basic tests over table sources. What do you think should be tested there? 
The source doesn't seem to have predicate pushdown support.
   
   If we're looking for the validity of the plan, it already does a logical 
filter in one of the tests.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812512544


   @fsk119 I'm looking at 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` and all I see there 
are basic tests over table sources. What do you think should be tested there? 
The source doesn't seem to have predicate pushdown support.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-22092) Prevent HiveCatalog from reading hive-site in classpath

2021-04-02 Thread Rui Li (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313853#comment-17313853
 ] 

Rui Li commented on FLINK-22092:


I've put more thought into this, and I think it's not a good idea to stop reading 
hive-site from the classpath, because that may break existing use cases. Instead, 
we can just prevent HiveConf from reading hive-site from the {{hiveSiteURL}} 
variable.

> Prevent HiveCatalog from reading hive-site in classpath
> ---
>
> Key: FLINK-22092
> URL: https://issues.apache.org/jira/browse/FLINK-22092
> Project: Flink
>  Issue Type: Bug
>  Components: Connectors / Hive
>Reporter: Rui Li
>Assignee: Rui Li
>Priority: Major
> Fix For: 1.13.0
>
>
> It turns out that FLINK-19702 is incomplete and {{HiveConf}} may still 
> automatically load hive-site from classpath and set {{hiveSiteURL}} variable. 
> This can cause problems, e.g. create a HiveCatalog reading hive-site from 
> classpath, drop this catalog and also remove the hive-site file, create 
> another HiveCatalog with hive-conf-dir pointing to another location, the 2nd 
> HiveCatalog cannot be created because {{HiveConf}} has remembered the 
> hive-site location from the previous one and complains the file can no longer 
> be found.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * f4e94916e5c597341059d98bbff2b37d23cc4203 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15133)
 
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   * dcfc0b3d5083c0b2eaf95b0c5ea0e11baeeb4eed UNKNOWN
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812509449


   > > OK, I see what you're saying. Is there a reason though that the 
`FlinkLogicalCalc` is being generated twice?
   > 
   > In optimization, we always build a new node rather than reusing the original node. @godfreyhe is more familiar with this.
   > 
   > > Do you think we can get this into 1.13? This is a critical fix for me.
   > 
   > I am not sure about this... It depends more on you.
   
   OK, I'm waiting on your comments so we can get this finalized.






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812507579


   > 
   > OK, I see what you're saying. Is there a reason though that the 
`FlinkLogicalCalc` is being generated twice?
   
   In optimization, we always build a new node rather than reusing the original node. @godfreyhe is more familiar with this.
   
   > 
   > Do you think we can get this into 1.13? This is a critical fix for me.
   
   I am not sure about this... It depends more on you.






[GitHub] [flink] flinkbot edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-803592550


   
   ## CI report:
   
   * f4e94916e5c597341059d98bbff2b37d23cc4203 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15133)
 
   * 2ef40343abecf693b39b7b7fcb9f6d0d26bf82cd UNKNOWN
   
   
   






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812504803


   > > @YuvalItzchakov . I think the added test is not enough to cover all.
   > > I think you also need to add end-to-end tests in 
`org.apache.flink.table.planner.runtime.stream.sql.CalcITCase` and 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` to verify the added 
rule doesn't have conflicts with others.
   > 
   > The test in my code is not enough. Please add more tests, as I mentioned 
before.
   
   On it.






[GitHub] [flink] YuvalItzchakov edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812503994


   > > @YuvalItzchakov . I think the added test is not enough to cover all.
   > > I think you also need to add end-to-end tests in 
`org.apache.flink.table.planner.runtime.stream.sql.CalcITCase` and 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` to verify the added 
rule doesn't have conflicts with others.
   > 
   > The test in my code is not enough. Please add more tests, as I mentioned 
before.
   
   OK, I see what you're saying. Is there a reason though that the 
`FlinkLogicalCalc` is being generated twice?
   
   Do you think we can get this into 1.13? This is a critical fix for me.






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812503994


   > > @YuvalItzchakov . I think the added test is not enough to cover all.
   > > I think you also need to add end-to-end tests in 
`org.apache.flink.table.planner.runtime.stream.sql.CalcITCase` and 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` to verify the added 
rule doesn't have conflicts with others.
   > 
   > The test in my code is not enough. Please add more tests, as I mentioned 
before.
   
   OK, I see what you're saying.
   
   Do you think we can get this into 1.13? This is a critical fix for me.






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812503759


   > @YuvalItzchakov . I think the added test is not enough to cover all.
   > 
   > I think you also need to add end-to-end tests in 
`org.apache.flink.table.planner.runtime.stream.sql.CalcITCase` and 
`org.apache.flink.table.planner.plan.stream.sql.CalcTest` to verify the added 
rule doesn't have conflicts with others.
   
   The test in my code is not enough. Please add more tests, as I mentioned 
before. 






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812502909


   > @fsk119 After the change, my partial predicate match no longer works:
   > 
   > ```java
   > @Test
   > public void testPushdownAcrossWatermarkPartialPredicateMatch() {
   > String ddl3 = "CREATE TABLE WithWatermark ("
   > + "  name STRING,\n"
   > + "  event_time TIMESTAMP(3),\n"
   > + "  WATERMARK FOR event_time as event_time - INTERVAL '5' 
SECOND"
   > + ") WITH (\n"
   > + " 'connector' = 'values',\n"
   > + " 'bounded' = 'true',\n"
   > + " 'filterable-fields' = 'name',\n"
   > + " 'enable-watermark-push-down' = 'false',\n"
   > + " 'disable-lookup' = 'true'"
   > + ")";
   > 
   > util.tableEnv().executeSql(ddl3);
   > util.verifyRelPlan(
   > "SELECT * FROM WithWatermark WHERE LOWER(name) = 'foo' AND 
name IS NOT NULL");
   > }
   > ```
   > 
   > Expected:
   > 
   > ```
   > Calc(select=[name, event_time], where=[IS NOT NULL(name)])
   > +- WatermarkAssigner(rowtime=[event_time], watermark=[-(event_time, 
5000:INTERVAL SECOND)])
   >+- TableSourceScan(table=[[default_catalog, default_database, 
WithWatermark, filter=[equals(lower(name), 'foo')]]], fields=[name, event_time])
   > ```
   > 
   > Actual:
   > 
   > ```
   > FlinkLogicalCalc(select=[name, event_time])
   > +- FlinkLogicalCalc(select=[name, event_time], where=[AND(=(LOWER(name), 
_UTF-16LE'foo':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), IS NOT 
NULL(name))])
   >+- FlinkLogicalWatermarkAssigner(rowtime=[event_time], watermark=[-($1, 
5000:INTERVAL SECOND)])
   >   +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, WithWatermark]], fields=[name, event_time])
   > ```
   > 
   > Partial match is not being pushed down.
   
   We shouldn't push down the filter if a watermark assigner exists. If we push 
down the filter without also pushing down the watermark assigner, the scan will 
only emit the records that meet the condition.
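
   The ordering issue can be illustrated with a toy model (hypothetical names and a simplified watermark formula, not Flink's actual operators): the watermark an assigner emits depends on every record the scan delivers, so filtering below the assigner can hold the watermark back.

```java
import java.util.List;
import java.util.function.Predicate;

/**
 * Toy model of the semantics issue: the watermark is derived from the
 * records the scan emits, so pushing the filter into the scan (below the
 * watermark assigner) changes which records can advance the watermark.
 */
class WatermarkOrderSketch {

    /** Watermark = max(eventTime) - delay, over the rows the assigner sees. */
    static long watermark(
            List<long[]> rows,             // rows.get(i) = {eventTime, key}
            Predicate<long[]> filter,      // predicate from the WHERE clause
            boolean filterPushedBelowAssigner,
            long delay) {
        long maxTs = Long.MIN_VALUE;
        for (long[] row : rows) {
            // If the filter is pushed into the scan, non-matching rows never
            // reach the assigner and cannot advance the watermark.
            if (filterPushedBelowAssigner && !filter.test(row)) {
                continue;
            }
            maxTs = Math.max(maxTs, row[0]);
        }
        return maxTs - delay;
    }
}
```

   With rows {ts=10, key=1} and {ts=100, key=2} and a filter on key=1, the assigner sees the max timestamp 100 only when the filter stays above it; with the filter pushed below, the watermark is held back to 10 minus the delay.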






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812502234


   > @fsk119 Pushed the changes after your comments. Can you take a look?
   
   Sure. I will take a look tomorrow. 






[GitHub] [flink] YuvalItzchakov edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812499504


   @fsk119 After the change, my partial predicate match no longer works:
   
   ```java
   @Test
   public void testPushdownAcrossWatermarkPartialPredicateMatch() {
   String ddl3 = "CREATE TABLE WithWatermark ("
   + "  name STRING,\n"
   + "  event_time TIMESTAMP(3),\n"
   + "  WATERMARK FOR event_time as event_time - INTERVAL '5' 
SECOND"
   + ") WITH (\n"
   + " 'connector' = 'values',\n"
   + " 'bounded' = 'true',\n"
   + " 'filterable-fields' = 'name',\n"
   + " 'enable-watermark-push-down' = 'false',\n"
   + " 'disable-lookup' = 'true'"
   + ")";
   
   util.tableEnv().executeSql(ddl3);
   util.verifyRelPlan(
   "SELECT * FROM WithWatermark WHERE LOWER(name) = 'foo' AND 
name IS NOT NULL");
   }
   ```
   
   Expected:
   
   ```
   Calc(select=[name, event_time], where=[IS NOT NULL(name)])
   +- WatermarkAssigner(rowtime=[event_time], watermark=[-(event_time, 
5000:INTERVAL SECOND)])
  +- TableSourceScan(table=[[default_catalog, default_database, 
WithWatermark, filter=[equals(lower(name), 'foo')]]], fields=[name, event_time])
   ```
   
   Actual:
   ```
   FlinkLogicalCalc(select=[name, event_time])
   +- FlinkLogicalCalc(select=[name, event_time], where=[AND(=(LOWER(name), 
_UTF-16LE'foo':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), IS NOT 
NULL(name))])
  +- FlinkLogicalWatermarkAssigner(rowtime=[event_time], watermark=[-($1, 
5000:INTERVAL SECOND)])
 +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, WithWatermark]], fields=[name, event_time])
   ```
   
   Partial match is not being pushed down.






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812499504


   @fsk119 After the change, my partial predicate match no longer works:
   
   ```java
   @Test
   public void testPushdownAcrossWatermarkPartialPredicateMatch() {
   String ddl3 = "CREATE TABLE WithWatermark ("
   + "  name STRING,\n"
   + "  event_time TIMESTAMP(3),\n"
   + "  WATERMARK FOR event_time as event_time - INTERVAL '5' 
SECOND"
   + ") WITH (\n"
   + " 'connector' = 'values',\n"
   + " 'bounded' = 'true',\n"
   + " 'filterable-fields' = 'name',\n"
   + " 'enable-watermark-push-down' = 'false',\n"
   + " 'disable-lookup' = 'true'"
   + ")";
   
   util.tableEnv().executeSql(ddl3);
   util.verifyRelPlan(
   "SELECT * FROM WithWatermark WHERE LOWER(name) = 'foo' AND 
name IS NOT NULL");
   }
   ```
   
   Expected:
   
   ```
   Calc(select=[name, event_time], where=[IS NOT NULL(name)])
   +- WatermarkAssigner(rowtime=[event_time], watermark=[-(event_time, 
5000:INTERVAL SECOND)])
  +- TableSourceScan(table=[[default_catalog, default_database, 
WithWatermark, filter=[equals(lower(name), 'foo')]]], fields=[name, event_time])
   ```
   
   Actual:
   ```
   FlinkLogicalCalc(select=[name, event_time])
   +- FlinkLogicalCalc(select=[name, event_time], where=[AND(=(LOWER(name), 
_UTF-16LE'foo':VARCHAR(2147483647) CHARACTER SET "UTF-16LE"), IS NOT 
NULL(name))])
  +- FlinkLogicalWatermarkAssigner(rowtime=[event_time], watermark=[-($1, 
5000:INTERVAL SECOND)])
 +- FlinkLogicalTableSourceScan(table=[[default_catalog, 
default_database, WithWatermark]], fields=[name, event_time])
   ```






[GitHub] [flink] flinkbot edited a comment on pull request #15480: [FLINK-22006][k8s] Support to configure max concurrent requests for fabric8 Kubernetes client via JAVA opts or envs

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15480:
URL: https://github.com/apache/flink/pull/15480#issuecomment-812378052


   
   ## CI report:
   
   * 8c7eca17544f157b58f4100af3527e346a63bbcb Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16016)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15466: [FLINK-22003][checkpointing] Prevent checkpoint from starting if any Source isn't running

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15466:
URL: https://github.com/apache/flink/pull/15466#issuecomment-811978896


   
   ## CI report:
   
   * f013abdc90d9b504720c1c94c1d847e144d38a7c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15983)
 
   * ad1b080091b48a82188261ef9fab9c4229af4c9a Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16024)
 
   
   
   






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812497261


   @fsk119 Pushed the changes after your comments. Can you take a look?






[GitHub] [flink] fsk119 edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812484309


   > @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?
   > 
   > If I instead take the code implemented by your PR, there's no need for the 
reusable base class here, so I want to know which one it is :)
   
   Hi @YuvalItzchakov. Please ignore the comments about the code. 
   
   I think you should take the code from my PR. I think optimization cannot 
break semantics, but the current implementation does break the semantics. It 
will generate watermarks only for the records that meet the conditions.
   






[GitHub] [flink] fsk119 edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812492176


   > > > @fsk119 I see you also added a test for partition pushdown, are those 
related?
   > > 
   > > 
   > > Not related. I was just trying to find out why the change breaks the test.
   > > In the original test, filter push-down is applied first and then 
partition push-down, which adds `filter = []` into the digest. But the change 
in the PR delays the filter push-down, so partition push-down is applied first 
and then filter push-down. After partition push-down applies, we don't need to 
push the filter any more, which makes the digest different from before.
   > 
   > Does it still break after the new change you proposed?
   
   Yes. You can pull the code and run the unit tests for this.






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812492176


   > > > @fsk119 I see you also added a test for partition pushdown, are those 
related?
   > > 
   > > 
   > > Not related. I was just trying to find out why the change breaks the test.
   > > In the original test, filter push-down is applied first and then 
partition push-down, which adds `filter = []` into the digest. But the change 
in the PR delays the filter push-down, so partition push-down is applied first 
and then filter push-down. After partition push-down applies, we don't need to 
push the filter any more, which makes the digest different from before.
   > 
   > Does it still break after the new change you proposed?
   
   Yes. You can pull the code, go to the directory 
flink-table/flink-table-planner-blink, and run 
   ```
   mvn verify
   ```
   to test in your local environment.






[GitHub] [flink] YuvalItzchakov edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812490603


   > > @fsk119 I see you also added a test for partition pushdown, are those 
related?
   > 
   > Not related. I was just trying to find out why the change breaks the test.
   > 
   > In the original test, filter push-down is applied first and then partition 
push-down, which adds `filter = []` into the digest. But the change in the PR 
delays the filter push-down, so partition push-down is applied first and then 
filter push-down. After partition push-down applies, we don't need to push the 
filter any more, which makes the digest different from before.
   
   Does it still break after the new change you proposed?






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812490603


   > > @fsk119 I see you also added a test for partition pushdown, are those 
related?
   > 
   > Not related. I was just trying to find out why the change breaks the test.
   > 
   > In the original test, filter push-down is applied first and then partition 
push-down, which adds `filter = []` into the digest. But the change in the PR 
delays the filter push-down, so partition push-down is applied first and then 
filter push-down. After partition push-down applies, we don't need to push the 
filter any more, which makes the digest different from before.
   
   Does it still break after the change?






[GitHub] [flink] flinkbot edited a comment on pull request #15466: [FLINK-22003][checkpointing] Prevent checkpoint from starting if any Source isn't running

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15466:
URL: https://github.com/apache/flink/pull/15466#issuecomment-811978896


   
   ## CI report:
   
   * f013abdc90d9b504720c1c94c1d847e144d38a7c Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=15983)
 
   * ad1b080091b48a82188261ef9fab9c4229af4c9a UNKNOWN
   
   
   






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812488407


   > @fsk119 I see you also added a test for partition pushdown, are those 
related?
   
   Not related. I was just trying to find out why the change breaks the test. 
   
   In the original test, filter push-down is applied first and then partition 
push-down, which adds `filter = []` into the digest. But the change in the PR 
delays the filter push-down, so partition push-down is applied first and then 
filter push-down. After partition push-down applies, we don't need to push the 
filter any more, which makes the digest different from before. 
   






[GitHub] [flink] zzfukai commented on pull request #15054: [FLINK-13550][rest][ui] Vertex Flame Graph

2021-04-02 Thread GitBox


zzfukai commented on pull request #15054:
URL: https://github.com/apache/flink/pull/15054#issuecomment-812486880


   Thank you Arvid! It works.






[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812485639


   @fsk119 Awesome, OK. Fixes coming in a bit.






[GitHub] [flink] fsk119 edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812484309


   > @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?
   > 
   > If I instead take the code implemented by your PR, there's no need for the 
reusable base class here, so I want to know which one it is :)
   
   Hi @YuvalItzchakov. Please ignore the comments about the code. 
   
   I think you should take the code from my PR. I think optimization cannot 
break semantics, but the current implementation does break the semantics. It 
will generate watermarks only for the records that meet the conditions.
   






[GitHub] [flink] fsk119 edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812484309


   > @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?
   > 
   > If I instead take the code implemented by your PR, there's no need for the 
reusable base class here, so I want to know which one it is :)
   
   Hi @YuvalItzchakov. Please ignore the comments about the code. 
   
   I think you should take the code from my PR. I think optimization cannot 
break semantics, but the current implementation does break the semantics. It 
will generate watermarks only for the records that meet the conditions.
   






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812484309


   
   > @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?
   > 
   > If I instead take the code implemented by your PR, there's no need for the 
reusable base class here, so I want to know which one it is :)
   
   Hi @YuvalItzchakov. Please ignore the comments about the code. 
   
   I think you should take the code from my PR. I think optimization cannot 
break semantics, but the current implementation does break the semantics. It 
will generate watermarks only for the records that meet the conditions.
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15479: [FLINK-19606][table-runtime-blink] Implement streaming window join operator

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15479:
URL: https://github.com/apache/flink/pull/15479#issuecomment-812344008


   
   ## CI report:
   
   * 93b9b3d8dac2c3ecd13e3e3ede46f4a73263e34d Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16015)
 
   
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15477: [FLINK-22099][table-planner-blink] Fix bug for semi/anti window join.

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15477:
URL: https://github.com/apache/flink/pull/15477#issuecomment-812337208


   
   ## CI report:
   
   * 68bacb55c3c718eddba8cf5596bf94c3e4931d40 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16011)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15396: [FLINK-21008][coordination] Register a shutdown supplier in the SignalHandler for ClusterEntrypoint

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15396:
URL: https://github.com/apache/flink/pull/15396#issuecomment-808861897


   
   ## CI report:
   
   * 5e152ddd93611a57bf3659bd1c46967e46a9e4c2 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16014)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Resolved] (FLINK-22083) Python tests fail on azure

2021-04-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-22083.
--
Resolution: Fixed

> Python tests fail on azure
> --
>
> Key: FLINK-22083
> URL: https://issues.apache.org/jira/browse/FLINK-22083
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15934=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=23503
> {code}
> E at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> E at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> E at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> E at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> E at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> E at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> E at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> E at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> E at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> E at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> E at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> E ... 4 more
> E   Caused by: java.lang.Exception: The user defined 
> 'open(Configuration)' method in class 
> org.apache.flink.table.runtime.functions.python.PythonTableFunctionFlatMap 
> caused an exception: Failed to create stage bundle factory! 
> INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=313-1 
> --provision_endpoint=localhost:42447
> E   
> E at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1499)
> E at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.openTask(ChainedFlatMapDriver.java:47)
> E at 
> org.apache.flink.runtime.operators.BatchTask.openChainedTasks(BatchTask.java:1541)
> E at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:164)
> E at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:776)
> E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> E at java.lang.Thread.run(Thread.java:748)
> E   Caused by: java.lang.RuntimeException: Failed to create 
> stage bundle factory! INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=313-1 
> --provision_endpoint=localhost:42447
> E   
> E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:429)
> E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:279)
> E at 
> org.apache.flink.table.runtime.functions.python.AbstractPythonStatelessFunctionFlatMap.open(AbstractPythonStatelessFunctionFlatMap.java:188)
> E at 
> org.apache.flink.table.runtime.functions.python.PythonTableFunctionFlatMap.open(PythonTableFunctionFlatMap.java:84)
> E at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> E at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1493)
> E ... 6 more
> E   Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalStateException: Process died with exit code 0
> E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
> E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache.get(LocalCache.java:3952)
> E at 
> 

[jira] [Resolved] (FLINK-22101) PyFlinkBatchUserDefinedTableFunctionTests fail

2021-04-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-22101.
--
Resolution: Fixed

This issue should have been addressed in 
[FLINK-22076|https://issues.apache.org/jira/browse/FLINK-22076]. Feel free to 
reopen it if it still happens.

> PyFlinkBatchUserDefinedTableFunctionTests fail
> --
>
> Key: FLINK-22101
> URL: https://issues.apache.org/jira/browse/FLINK-22101
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Guowei Ma
>Priority: Major
>  Labels: test-stability
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15996=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=22771



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (FLINK-22083) Python tests fail on azure

2021-04-02 Thread Huang Xingbo (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-22083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313795#comment-17313795
 ] 

Huang Xingbo commented on FLINK-22083:
--

This issue should have been addressed in 
[FLINK-22076|https://issues.apache.org/jira/browse/FLINK-22076]. Feel free to 
reopen it if it still happens.

> Python tests fail on azure
> --
>
> Key: FLINK-22083
> URL: https://issues.apache.org/jira/browse/FLINK-22083
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Dawid Wysakowicz
>Priority: Major
>  Labels: test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=15934=logs=821b528f-1eed-5598-a3b4-7f748b13f261=4fad9527-b9a5-5015-1b70-8356e5c91490=23503
> {code}
> E at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
> E at 
> akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
> E at 
> scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
> E at 
> akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> E at 
> scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
> E at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
> E at 
> akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
> E at 
> akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
> E at akka.actor.ActorCell.invoke(ActorCell.scala:561)
> E at 
> akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
> E at akka.dispatch.Mailbox.run(Mailbox.scala:225)
> E at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
> E ... 4 more
> E   Caused by: java.lang.Exception: The user defined 
> 'open(Configuration)' method in class 
> org.apache.flink.table.runtime.functions.python.PythonTableFunctionFlatMap 
> caused an exception: Failed to create stage bundle factory! 
> INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=313-1 
> --provision_endpoint=localhost:42447
> E   
> E at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1499)
> E at 
> org.apache.flink.runtime.operators.chaining.ChainedFlatMapDriver.openTask(ChainedFlatMapDriver.java:47)
> E at 
> org.apache.flink.runtime.operators.BatchTask.openChainedTasks(BatchTask.java:1541)
> E at 
> org.apache.flink.runtime.operators.DataSourceTask.invoke(DataSourceTask.java:164)
> E at 
> org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:776)
> E at 
> org.apache.flink.runtime.taskmanager.Task.run(Task.java:563)
> E at java.lang.Thread.run(Thread.java:748)
> E   Caused by: java.lang.RuntimeException: Failed to create 
> stage bundle factory! INFO:root:Initializing python harness: 
> /__w/1/s/flink-python/pyflink/fn_execution/beam/beam_boot.py --id=313-1 
> --provision_endpoint=localhost:42447
> E   
> E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.createStageBundleFactory(BeamPythonFunctionRunner.java:429)
> E at 
> org.apache.flink.streaming.api.runners.python.beam.BeamPythonFunctionRunner.open(BeamPythonFunctionRunner.java:279)
> E at 
> org.apache.flink.table.runtime.functions.python.AbstractPythonStatelessFunctionFlatMap.open(AbstractPythonStatelessFunctionFlatMap.java:188)
> E at 
> org.apache.flink.table.runtime.functions.python.PythonTableFunctionFlatMap.open(PythonTableFunctionFlatMap.java:84)
> E at 
> org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:34)
> E at 
> org.apache.flink.runtime.operators.BatchTask.openUserCode(BatchTask.java:1493)
> E ... 6 more
> E   Caused by: 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.util.concurrent.UncheckedExecutionException:
>  java.lang.IllegalStateException: Process died with exit code 0
> E at 
> org.apache.beam.vendor.guava.v26_0_jre.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2050)
> E at 
> 

[jira] [Updated] (FLINK-22076) Python Test failed with "OSError: [Errno 12] Cannot allocate memory"

2021-04-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo updated FLINK-22076:
-
Fix Version/s: 1.13.0

> Python Test failed with "OSError: [Errno 12] Cannot allocate memory"
> 
>
> Key: FLINK-22076
> URL: https://issues.apache.org/jira/browse/FLINK-22076
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0
>
>
> https://dev.azure.com/sewen0794/Flink/_build/results?buildId=249=logs=fba17979-6d2e-591d-72f1-97cf42797c11=443dc6bf-b240-56df-6acf-c882d4b238da=21533
> Python Test failed with "OSError: [Errno 12] Cannot allocate memory" in Azure 
> Pipeline. I am not sure if it is caused by insufficient machine memory on 
> Azure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (FLINK-22076) Python Test failed with "OSError: [Errno 12] Cannot allocate memory"

2021-04-02 Thread Huang Xingbo (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Huang Xingbo resolved FLINK-22076.
--
Resolution: Fixed

Merged into master via 000c69dcfb9632c152a45a7971bdef7e3ef0556e

> Python Test failed with "OSError: [Errno 12] Cannot allocate memory"
> 
>
> Key: FLINK-22076
> URL: https://issues.apache.org/jira/browse/FLINK-22076
> Project: Flink
>  Issue Type: Bug
>  Components: API / Python
>Affects Versions: 1.13.0
>Reporter: Huang Xingbo
>Assignee: Huang Xingbo
>Priority: Major
>  Labels: pull-request-available, test-stability
>
> https://dev.azure.com/sewen0794/Flink/_build/results?buildId=249=logs=fba17979-6d2e-591d-72f1-97cf42797c11=443dc6bf-b240-56df-6acf-c882d4b238da=21533
> Python Test failed with "OSError: [Errno 12] Cannot allocate memory" in Azure 
> Pipeline. I am not sure if it is caused by insufficient machine memory on 
> Azure.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] HuangXingBo closed pull request #15481: [FLINK-22076][python] Split the global test into multiple module tests

2021-04-02 Thread GitBox


HuangXingBo closed pull request #15481:
URL: https://github.com/apache/flink/pull/15481


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] rkhachatryan commented on pull request #15466: [FLINK-22003][checkpointing] Prevent checkpoint from starting if any Source isn't running

2021-04-02 Thread GitBox


rkhachatryan commented on pull request #15466:
URL: https://github.com/apache/flink/pull/15466#issuecomment-812478658


   Thanks @gaoyunhaii , I think you are right, I'll update the test.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812478187


   @fsk119 I see you also added a test for partition pushdown, are those 
related?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Assigned] (FLINK-20103) Improve test coverage for network stack

2021-04-02 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-20103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski reassigned FLINK-20103:
--

Assignee: Piotr Nowojski  (was: Roman Khachatryan)

> Improve test coverage for network stack
> ---
>
> Key: FLINK-20103
> URL: https://issues.apache.org/jira/browse/FLINK-20103
> Project: Flink
>  Issue Type: Improvement
>  Components: Runtime / Checkpointing, Runtime / Network, Tests
>Reporter: Roman Khachatryan
>Assignee: Piotr Nowojski
>Priority: Major
> Fix For: 1.13.0
>
>
> This is a follow-up ticket after FLINK-20097.
> With the current setup (UnalignedITCase):
>  - race conditions are not detected reliably (1 per tens of runs)
>  - require changing the configuration (low checkpoint timeout)
>  - adding a new job graph often reveals a new bug
> An additional issue with the current setup is that it's difficult to git 
> bisect (for long ranges). 
> Changes that might hide the bugs:
>  - having Preconditions in ChannelStatePersister (slow down processing)
>  - some Preconditions may mask errors by causing job restart
>  - timings in tests (UnalignedITCase)
>  Some options to consider
>  # chaos monkey tests including induced latency and/or CPU bursts - on 
> different workloads/configs
>  # side-by-side tests with randomized inputs/configs
> Extending Jepsen coverage further (validating output) does not seem promising 
> in the context of Flink because its output isn't linearisable.
>   
> Some tools for (1) that could be used:
> 1. https://github.com/chaosblade-io/chaosblade (docs need translation)
> 2. https://github.com/Netflix/chaosmonkey - requires spinnaker (CD)
> 3. jvm agent: https://github.com/mrwilson/byte-monkey
> 4. https://vmware.github.io/mangle/ - supports java method latency; ui 
> oriented?; not actively maintained?
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] YuvalItzchakov commented on a change in pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on a change in pull request #15307:
URL: https://github.com/apache/flink/pull/15307#discussion_r606178436



##
File path: 
flink-table/flink-table-planner-blink/src/test/java/org/apache/flink/table/planner/factories/TestValuesTableFactory.java
##
@@ -923,6 +924,7 @@ private TestValuesScanTableSourceWithWatermarkPushDown(
 Map, Collection> data,
 String tableName,
 boolean nestedProjectionSupported,
+boolean bounded,

Review comment:
   I thought I needed this to test the Watermark + Pushdown mechanism but 
turns out I don't. Will revert.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] YuvalItzchakov commented on a change in pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on a change in pull request #15307:
URL: https://github.com/apache/flink/pull/15307#discussion_r606178294



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushFilterIntoTableSourceScanAcrossWatermarkRule.java
##
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.TableScan;
+
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import 
org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkRelFactories;
+import org.apache.flink.table.planner.plan.abilities.source.FilterPushDownSpec;
+import 
org.apache.flink.table.planner.plan.abilities.source.SourceAbilityContext;
+import org.apache.flink.table.planner.plan.abilities.source.SourceAbilitySpec;
+import 
org.apache.flink.table.planner.plan.nodes.calcite.LogicalWatermarkAssigner;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.ShortcutUtils;
+
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalFilter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.TimeZone;
+
+import scala.Tuple2;
+
+/**
+ * Pushes a {@link LogicalFilter} past a {@link LogicalWatermarkAssigner} and 
into {@link LogicalTableScan}
+ * in case the table scan implements {@link SupportsFilterPushDown}.
+ */
+public class PushFilterIntoTableSourceScanAcrossWatermarkRule extends 
PushFilterIntoSourceScanRuleBase {
+public static final PushFilterIntoTableSourceScanAcrossWatermarkRule 
INSTANCE = new PushFilterIntoTableSourceScanAcrossWatermarkRule();
+
+public PushFilterIntoTableSourceScanAcrossWatermarkRule() {
+super(
+operand(LogicalFilter.class, 
operand(LogicalWatermarkAssigner.class, operand(
+LogicalTableScan.class, none()))),
+FlinkRelFactories.FLINK_REL_BUILDER(),
+"PushFilterIntoTableSourceScanAcrossWatermarkRule");
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+LogicalFilter filter = call.rel(0);
+LogicalWatermarkAssigner watermarkAssigner = call.rel(1);
+LogicalTableScan scan = call.rel(2);
+
+TableSourceTable oldTableSourceTable = 
scan.getTable().unwrap(TableSourceTable.class);
+
+RelBuilder relBuilder = call.builder();
+FlinkContext context = ShortcutUtils.unwrapContext(scan);
+int maxCnfNodeCount = FlinkRelOptUtil.getMaxCnfNodeCount(scan);
+RexNodeToExpressionConverter converter =
+new RexNodeToExpressionConverter(
+relBuilder.getRexBuilder(),
+
filter.getInput().getRowType().getFieldNames().toArray(new String[0]),
+context.getFunctionCatalog(),
+context.getCatalogManager(),
+
TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+
+Tuple2<RexNode[], RexNode[]> tuple2 =
+RexNodeExtractor.extractConjunctiveConditions(
+filter.getCondition(),
+maxCnfNodeCount,
+relBuilder.getRexBuilder(),
+converter);
+RexNode[] convertiblePredicates = tuple2._1;
+RexNode[] unconvertedPredicates = tuple2._2;
+if (convertiblePredicates.length == 0) {
+// no condition can be translated to expression
+return;
+}
+
+// record size before 

[GitHub] [flink] YuvalItzchakov commented on a change in pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on a change in pull request #15307:
URL: https://github.com/apache/flink/pull/15307#discussion_r606177935



##
File path: 
flink-table/flink-table-planner-blink/src/main/java/org/apache/flink/table/planner/plan/rules/logical/PushFilterIntoTableSourceScanAcrossWatermarkRule.java
##
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to you under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.flink.table.planner.plan.rules.logical;
+
+
+import org.apache.calcite.rel.RelNode;
+import org.apache.calcite.rel.core.TableScan;
+
+import org.apache.flink.table.connector.source.DynamicTableSource;
+import 
org.apache.flink.table.connector.source.abilities.SupportsFilterPushDown;
+import org.apache.flink.table.planner.calcite.FlinkContext;
+import org.apache.flink.table.planner.calcite.FlinkRelFactories;
+import org.apache.flink.table.planner.plan.abilities.source.FilterPushDownSpec;
+import 
org.apache.flink.table.planner.plan.abilities.source.SourceAbilityContext;
+import org.apache.flink.table.planner.plan.abilities.source.SourceAbilitySpec;
+import 
org.apache.flink.table.planner.plan.nodes.calcite.LogicalWatermarkAssigner;
+import org.apache.flink.table.planner.plan.schema.TableSourceTable;
+import org.apache.flink.table.planner.plan.utils.FlinkRelOptUtil;
+import org.apache.flink.table.planner.plan.utils.RexNodeExtractor;
+import org.apache.flink.table.planner.plan.utils.RexNodeToExpressionConverter;
+import org.apache.flink.table.planner.utils.ShortcutUtils;
+
+import org.apache.calcite.plan.RelOptRuleCall;
+import org.apache.calcite.rel.core.Filter;
+import org.apache.calcite.rel.logical.LogicalFilter;
+import org.apache.calcite.rel.logical.LogicalTableScan;
+import org.apache.calcite.rex.RexNode;
+import org.apache.calcite.tools.RelBuilder;
+
+import java.util.Arrays;
+import java.util.List;
+import java.util.TimeZone;
+
+import scala.Tuple2;
+
+/**
+ * Pushes a {@link LogicalFilter} past a {@link LogicalWatermarkAssigner} and 
into {@link LogicalTableScan}
+ * in case the table scan implements {@link SupportsFilterPushDown}.
+ */
+public class PushFilterIntoTableSourceScanAcrossWatermarkRule extends 
PushFilterIntoSourceScanRuleBase {
+public static final PushFilterIntoTableSourceScanAcrossWatermarkRule 
INSTANCE = new PushFilterIntoTableSourceScanAcrossWatermarkRule();
+
+public PushFilterIntoTableSourceScanAcrossWatermarkRule() {
+super(
+operand(LogicalFilter.class, 
operand(LogicalWatermarkAssigner.class, operand(
+LogicalTableScan.class, none()))),
+FlinkRelFactories.FLINK_REL_BUILDER(),
+"PushFilterIntoTableSourceScanAcrossWatermarkRule");
+}
+
+@Override
+public void onMatch(RelOptRuleCall call) {
+LogicalFilter filter = call.rel(0);
+LogicalWatermarkAssigner watermarkAssigner = call.rel(1);
+LogicalTableScan scan = call.rel(2);
+
+TableSourceTable oldTableSourceTable = 
scan.getTable().unwrap(TableSourceTable.class);
+
+RelBuilder relBuilder = call.builder();
+FlinkContext context = ShortcutUtils.unwrapContext(scan);
+int maxCnfNodeCount = FlinkRelOptUtil.getMaxCnfNodeCount(scan);
+RexNodeToExpressionConverter converter =
+new RexNodeToExpressionConverter(
+relBuilder.getRexBuilder(),
+
filter.getInput().getRowType().getFieldNames().toArray(new String[0]),
+context.getFunctionCatalog(),
+context.getCatalogManager(),
+
TimeZone.getTimeZone(context.getTableConfig().getLocalTimeZone()));
+
+Tuple2<RexNode[], RexNode[]> tuple2 =
+RexNodeExtractor.extractConjunctiveConditions(
+filter.getCondition(),
+maxCnfNodeCount,
+relBuilder.getRexBuilder(),
+converter);
+RexNode[] convertiblePredicates = tuple2._1;
+RexNode[] unconvertedPredicates = tuple2._2;
+if (convertiblePredicates.length == 0) {
+// no condition can be translated to expression
+return;
+}
+
+// record size before 

[GitHub] [flink] YuvalItzchakov edited a comment on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov edited a comment on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812471049


   @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?
   
   If I instead take the code implemented by your PR, there's no need for the 
reusable base class here, so I want to know which one it is :)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] YuvalItzchakov commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


YuvalItzchakov commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812471049


   @fsk119 I'm confused with all the suggestions. Should I keep working off 
this branch but have this as a logical rewrite instead? Are all the comments 
you left above still valid?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (FLINK-21247) flink iceberg table map cannot convert to datastream

2021-04-02 Thread Zheng Hu (Jira)


[ 
https://issues.apache.org/jira/browse/FLINK-21247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17313779#comment-17313779
 ] 

Zheng Hu commented on FLINK-21247:
--

I think this will need to get merged in Flink 1.13.0, otherwise the newly 
released Flink won't be able to read the Iceberg map data type. This is a 
critical bug.

> flink iceberg table map cannot convert to datastream
> ---
>
> Key: FLINK-21247
> URL: https://issues.apache.org/jira/browse/FLINK-21247
> Project: Flink
>  Issue Type: New Feature
>  Components: Table SQL / Ecosystem
>Affects Versions: 1.12.1
> Environment: iceberg master
> flink 1.12
>  
>  
>Reporter: donglei
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2021-02-03-15-38-42-340.png, 
> image-2021-02-03-15-40-27-055.png, image-2021-02-03-15-41-34-426.png, 
> image-2021-02-03-15-43-19-919.png, image-2021-02-03-15-52-12-493.png, 
> image-2021-02-03-15-53-18-244.png
>
>
> Flink Iceberg Table with map
> !image-2021-02-03-15-38-42-340.png!
>  
> we want to read the table like this :
>  
> String querySql = "SELECT 
> ftime,extinfo,country,province,operator,apn,gw,src_ip_head,info_str,product_id,app_version,sdk_id,sdk_version,hardware_os,qua,upload_ip,client_ip,upload_apn,event_code,event_result,package_size,consume_time,event_value,event_time,upload_time,boundle_id,uin,platform,os_version,channel,brand,model
>  from bfzt3 ";
> Table table = tEnv.sqlQuery(querySql);
> DataStream sinkStream = tEnv.toAppendStream(table, 
> Types.POJO(AttaInfo.class, map));
> sinkStream.map(x->1).returns(Types.INT).keyBy(new 
> NullByteKeySelector()).reduce((x,y) -> {
>  return x+y;
> }).print();
>  
>  
> when reading, we find an exception
>  
> 2021-02-03 15:37:57
> java.lang.ClassCastException: 
> org.apache.iceberg.flink.data.FlinkParquetReaders$ReusableMapData cannot be 
> cast to org.apache.flink.table.data.binary.BinaryMapData
> at 
> org.apache.flink.table.runtime.typeutils.MapDataSerializer.copy(MapDataSerializer.java:107)
> at 
> org.apache.flink.table.runtime.typeutils.MapDataSerializer.copy(MapDataSerializer.java:47)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copyRowData(RowDataSerializer.java:166)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:129)
> at 
> org.apache.flink.table.runtime.typeutils.RowDataSerializer.copy(RowDataSerializer.java:50)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:69)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:46)
> at 
> org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:26)
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:50)
> at 
> org.apache.flink.streaming.api.operators.CountingOutput.collect(CountingOutput.java:28)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$ManualWatermarkContext.processAndCollect(StreamSourceContexts.java:317)
> at 
> org.apache.flink.streaming.api.operators.StreamSourceContexts$WatermarkContext.collect(StreamSourceContexts.java:411)
> at 
> org.apache.flink.streaming.api.functions.source.InputFormatSourceFunction.run(InputFormatSourceFunction.java:92)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:110)
> at 
> org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:66)
> at 
> org.apache.flink.streaming.runtime.tasks.SourceStreamTask$LegacySourceFunctionThread.run(SourceStreamTask.java:241)
>  
> we find that the Iceberg map implementation is ReusableMapData, which implements MapData 
> !image-2021-02-03-15-40-27-055.png!
>  
> this is where the exception is thrown 
> !image-2021-02-03-15-41-34-426.png!
> MapData has two default implementations, GenericMapData and BinaryMapData,
> while the implementation coming from Iceberg is ReusableMapData
>  
> so I think the code should be changed to something like this 
> !image-2021-02-03-15-43-19-919.png!
>  
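The fix direction quoted above — branching on the concrete MapData implementation instead of casting unconditionally — can be sketched as follows. The nested classes here are simplified, invented stand-ins whose names mirror the real Flink/Iceberg classes; they are not the actual implementations.

```java
// Simplified stand-ins for the classes in the report above; illustrative only.
public class MapCopyExample {

    interface MapData { int size(); }

    // Stand-in for Flink's binary-format implementation.
    static class BinaryMapData implements MapData {
        public int size() { return 0; }
    }

    // Stand-in for the implementation returned by Iceberg's Parquet reader.
    static class ReusableMapData implements MapData {
        public int size() { return 0; }
    }

    // The reported bug: an unconditional cast only works for BinaryMapData.
    static String copyUnsafe(MapData from) {
        BinaryMapData binary = (BinaryMapData) from; // ClassCastException for ReusableMapData
        return "binary-copy";
    }

    // The proposed direction: take the binary fast path only when it applies,
    // and fall back to a generic element-wise copy for any other MapData.
    static String copySafe(MapData from) {
        if (from instanceof BinaryMapData) {
            return "binary-copy";  // fast path: copy the binary segments directly
        }
        return "generic-copy";     // generic path: iterate keys/values and copy each
    }
}
```

Any third-party MapData implementation then goes through the generic path rather than failing on the cast.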



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [flink] flinkbot edited a comment on pull request #15280: [FLINK-21714][table-api] Use TIMESTAMP_LTZ as return type for function PROCTIME()

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15280:
URL: https://github.com/apache/flink/pull/15280#issuecomment-802654466


   
   ## CI report:
   
   * 3b4b5fcd9d8108b51e0bf62a0bee888b4fdb5186 UNKNOWN
   * 621905dc52204133a9d610a2d0c9a5f1aeaafd53 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16010)
 
   
   
   Bot commands
 The @flinkbot bot supports the following commands:
   
- `@flinkbot run travis` re-run the last Travis build
- `@flinkbot run azure` re-run the last Azure build
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [flink] flinkbot edited a comment on pull request #15247: [FLINK-21833][Table SQL / Runtime] TemporalRowTimeJoinOperator.java will lead to the state expansion by short-life-cycle & huge RowDa

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15247:
URL: https://github.com/apache/flink/pull/15247#issuecomment-800773807


   
   ## CI report:
   
   * a30faa8be8081f0037cd56e24396ad292fe49de7 Azure: 
[SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16013)
 
   
   






[jira] [Updated] (FLINK-22088) CheckpointCoordinator might not be able to abort triggering checkpoint if failover happens during triggering

2021-04-02 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-22088:
---
Affects Version/s: 1.13.0
   1.12.2

> CheckpointCoordinator might not be able to abort triggering checkpoint if 
> failover happens during triggering
> 
>
> Key: FLINK-22088
> URL: https://issues.apache.org/jira/browse/FLINK-22088
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Affects Versions: 1.12.2, 1.13.0
>Reporter: Yun Gao
>Priority: Minor
>
> Currently, when a job fails over, it tries to cancel all pending checkpoints 
> via CheckpointCoordinatorDeActivator#jobStatusChanges -> 
> stopCheckpointScheduler, which cancels all pending checkpoints and also sets 
> periodicScheduling to false. 
> If at that moment one checkpoint has just started triggering, it may have 
> acquired all the executions to trigger before the failover and begun 
> triggering. Ideally it should be aborted in createPendingCheckpoint -> 
> preCheckGlobalState. However, since the check and the creation of the pending 
> checkpoint happen in two different lock scopes, there are cases where 
> CheckpointCoordinator#stopCheckpointScheduler runs between the two lock 
> scopes. 
> We may optimize this check; however, since the execution would ultimately 
> fail to trigger the checkpoint, it should not affect the correctness of the 
> job. Besides, even if we optimize it, there could still be cases where the 
> execution fails to trigger due to a concurrent failover. 
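The two-lock-scope window described in the issue can be illustrated with a minimal sketch. This is not Flink's actual code: the method names mirror the real ones, but the bodies are invented for illustration.

```java
// Minimal sketch of the race: the pre-check and the pending-checkpoint
// creation each take the coordinator lock separately, so a stop request can
// run in between and go unnoticed by the creation step.
public class CheckpointRaceSketch {
    private final Object lock = new Object();
    private boolean schedulerRunning = true;

    boolean preCheckGlobalState() {
        synchronized (lock) { return schedulerRunning; }   // lock scope 1: check
    }

    void stopCheckpointScheduler() {
        synchronized (lock) { schedulerRunning = false; }  // failover path
    }

    String createPendingCheckpoint() {
        synchronized (lock) { return "PendingCheckpoint"; } // lock scope 2: no re-check
    }

    // Triggering: check, then a window where the lock is released, then create.
    static String trigger(CheckpointRaceSketch c, Runnable raceWindow) {
        if (!c.preCheckGlobalState()) {
            return "aborted";
        }
        raceWindow.run(); // a failover may call stopCheckpointScheduler() here
        return c.createPendingCheckpoint(); // still creates the checkpoint
    }
}
```

If the stop happens inside the window, a pending checkpoint is still created; as the issue notes, the trigger ultimately fails anyway, so correctness is unaffected.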





[jira] [Updated] (FLINK-22088) CheckpointCoordinator might not be able to abort triggering checkpoint if failover happens during triggering

2021-04-02 Thread Piotr Nowojski (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-22088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Piotr Nowojski updated FLINK-22088:
---
Component/s: Runtime / Checkpointing

> CheckpointCoordinator might not be able to abort triggering checkpoint if 
> failover happens during triggering
> 
>
> Key: FLINK-22088
> URL: https://issues.apache.org/jira/browse/FLINK-22088
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Checkpointing
>Reporter: Yun Gao
>Priority: Minor
>
> Currently, when a job fails over, it tries to cancel all pending checkpoints 
> via CheckpointCoordinatorDeActivator#jobStatusChanges -> 
> stopCheckpointScheduler, which cancels all pending checkpoints and also sets 
> periodicScheduling to false. 
> If at that moment one checkpoint has just started triggering, it may have 
> acquired all the executions to trigger before the failover and begun 
> triggering. Ideally it should be aborted in createPendingCheckpoint -> 
> preCheckGlobalState. However, since the check and the creation of the pending 
> checkpoint happen in two different lock scopes, there are cases where 
> CheckpointCoordinator#stopCheckpointScheduler runs between the two lock 
> scopes. 
> We may optimize this check; however, since the execution would ultimately 
> fail to trigger the checkpoint, it should not affect the correctness of the 
> job. Besides, even if we optimize it, there could still be cases where the 
> execution fails to trigger due to a concurrent failover. 





[GitHub] [flink] flinkbot edited a comment on pull request #15482: [FLINK-22103][hive] Fix HiveModuleTest for 1.2.1

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15482:
URL: https://github.com/apache/flink/pull/15482#issuecomment-812447703


   
   ## CI report:
   
   * 2d9cd4f63f2e806f0a9a0466e4e6b6cbba02bd89 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16021)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15425: [FLINK-21133][connector/checkpoint] Fix the stop-with-savepoint case …

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15425:
URL: https://github.com/apache/flink/pull/15425#issuecomment-809890369


   
   ## CI report:
   
   * 1b248e49e4695b3ac7d21320a377fb74ff460687 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16020)
 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16003)
 
   
   






[jira] [Closed] (FLINK-21963) ReactiveModelITCase.testScaleDownOnTaskManagerLoss failed / hangs

2021-04-02 Thread Robert Metzger (Jira)


 [ 
https://issues.apache.org/jira/browse/FLINK-21963?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Metzger closed FLINK-21963.
--
Resolution: Fixed

Resolved in 
https://github.com/apache/flink/commit/ac6317fd5bcaf90a88dee74e0171a55933395907

> ReactiveModelITCase.testScaleDownOnTaskManagerLoss failed / hangs
> -
>
> Key: FLINK-21963
> URL: https://issues.apache.org/jira/browse/FLINK-21963
> Project: Flink
>  Issue Type: Bug
>  Components: Runtime / Coordination, Tests
>Affects Versions: 1.13.0
>Reporter: Matthias
>Assignee: Robert Metzger
>Priority: Critical
>  Labels: pull-request-available, test-stability
> Fix For: 1.13.0
>
>
> [This 
> build|https://dev.azure.com/mapohl/flink/_build/results?buildId=360=logs=e0582806-6d85-5dc5-7eb4-4289d3d0de6b=9fea6cf4-6ce3-5c26-d059-69f4d4cec7d1=4442]
>  failed (not exclusively) due to 
> {{ReactiveModelITCase.testScaleDownOnTaskManagerLoss}}.
> I was able to reproduce it locally with the {{DefaultScheduler}} enabled. 
> The test seems to get into an infinite loop:
> {code}
> [...]
> 76125 [flink-akka.actor.default-dispatcher-4] INFO  
> org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Un-registering task 
> and sending final execution state FAILED to JobManager for task Source: 
> Custom Source -> Sink: Unnamed (4/4)#8738 92b920a905c55fc85a76c79b3acef161.
> 76125 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.allocator.SharedSlot [] - 
> Returning logical slot to shared slot 
> (SlotRequestId{0896a914cffb9d6631dc061ff4f485b4})
> 76125 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.allocator.SharedSlot [] - Release 
> shared slot externally (SlotRequestId{0896a914cffb9d6631dc061ff4f485b4})
> 76125 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.jobmaster.slotpool.DefaultDeclarativeSlotPool [] - 
> Free reserved slot aec00279d7404b26a104ee906695d27a.
> 76125 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.allocator.SharedSlot [] - Release 
> shared slot (SlotRequestId{0896a914cffb9d6631dc061ff4f485b4})
> 76125 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.AdaptiveScheduler [] - Transition 
> from state Executing to Restarting.
> 76125 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job Flink 
> Streaming Job (4b5f437c7c47c8be9f8d8bf08e78910a) switched from state RUNNING 
> to CANCELLING.
> 76125 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom 
> Source -> Sink: Unnamed (2/4) (843d0c154f55a15a9bb1e705ae282032) switched 
> from RUNNING to CANCELING.
> 76125 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom 
> Source -> Sink: Unnamed (3/4) (ba0dd94db26abc376ee73522410b8094) switched 
> from RUNNING to CANCELING.
> 76125 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom 
> Source -> Sink: Unnamed (4/4) (92b920a905c55fc85a76c79b3acef161) switched 
> from RUNNING to CANCELING.
> 76126 [flink-akka.actor.default-dispatcher-4] DEBUG 
> org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Cannot find task to 
> stop for execution 843d0c154f55a15a9bb1e705ae282032.
> 76126 [flink-akka.actor.default-dispatcher-3] DEBUG 
> org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Cannot find task to 
> stop for execution ba0dd94db26abc376ee73522410b8094.
> 76126 [flink-akka.actor.default-dispatcher-3] DEBUG 
> org.apache.flink.runtime.taskexecutor.TaskExecutor [] - Cannot find task to 
> stop for execution 92b920a905c55fc85a76c79b3acef161.
> 76126 [flink-akka.actor.default-dispatcher-2] INFO  
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Source: Custom 
> Source -> Sink: Unnamed (3/4) (ba0dd94db26abc376ee73522410b8094) switched 
> from CANCELING to CANCELED.
> 76126 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Ignoring 
> transition of vertex Source: Custom Source -> Sink: Unnamed (3/4) - execution 
> #8738 to FAILED while being CANCELED.
> 76126 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.allocator.SharedSlot [] - 
> Returning logical slot to shared slot 
> (SlotRequestId{786f89cafa4833afb26d0eb5da265a38})
> 76126 [flink-akka.actor.default-dispatcher-2] DEBUG 
> org.apache.flink.runtime.scheduler.adaptive.allocator.SharedSlot [] - Release 
> shared slot externally 

[GitHub] [flink] rmetzger closed pull request #15417: [FLINK-21963] Harden ReactiveModeITCase

2021-04-02 Thread GitBox


rmetzger closed pull request #15417:
URL: https://github.com/apache/flink/pull/15417


   






[GitHub] [flink] flinkbot commented on pull request #15482: [FLINK-22103][hive] Fix HiveModuleTest for 1.2.1

2021-04-02 Thread GitBox


flinkbot commented on pull request #15482:
URL: https://github.com/apache/flink/pull/15482#issuecomment-812447703


   
   ## CI report:
   
   * 2d9cd4f63f2e806f0a9a0466e4e6b6cbba02bd89 UNKNOWN
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15481: [FLINK-22076][python] Split the global test into multiple module tests

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15481:
URL: https://github.com/apache/flink/pull/15481#issuecomment-812435755


   
   ## CI report:
   
   * e05738d21a61981ede35c8a3e6e6676f03d7d470 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16019)
 
   
   






[GitHub] [flink] flinkbot edited a comment on pull request #15425: [FLINK-21133][connector/checkpoint] Fix the stop-with-savepoint case …

2021-04-02 Thread GitBox


flinkbot edited a comment on pull request #15425:
URL: https://github.com/apache/flink/pull/15425#issuecomment-809890369


   
   ## CI report:
   
   * 1b248e49e4695b3ac7d21320a377fb74ff460687 Azure: 
[FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16003)
 Azure: 
[PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=16020)
 
   
   






[GitHub] [flink] fsk119 commented on pull request #15307: [FLINK-21675][table-planner-blink] Allow Predicate Pushdown with Watermark Assigner Between Filter and Scan

2021-04-02 Thread GitBox


fsk119 commented on pull request #15307:
URL: https://github.com/apache/flink/pull/15307#issuecomment-812446500


   Please also rebase onto the latest branch...






[GitHub] [flink] rmetzger commented on pull request #15417: [FLINK-21963] Harden ReactiveModeITCase

2021-04-02 Thread GitBox


rmetzger commented on pull request #15417:
URL: https://github.com/apache/flink/pull/15417#issuecomment-812446102


   Thanks a lot for your review! I'll merge this change now with your comments 
addressed!





