Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158489057
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamJoin.scala
---
@@ -0,0 +1,207
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158489047
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/util/UpdatingPlanChecker.scala
---
@@ -116,14 +135,100 @@ object
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158489017
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/DataStreamInnerJoin.scala
---
@@ -0,0 +1,285
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158489035
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/util/UpdatingPlanChecker.scala
---
@@ -116,14 +135,100 @@ object
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158488989
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/runtime/stream/table/JoinITCase.scala
---
@@ -0,0 +1,262
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r158489025
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/DataStreamInnerJoin.scala
---
@@ -0,0 +1,285
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4471
Hi @twalthr , thanks for your review.
The PR has been updated according to your comments. It mainly contains the
following changes:
- Do some minor refactoring in `UpdatingPlanChecker
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4471
Hi @twalthr , sorry for the late reply. The GitHub notification was
mistakenly overlooked. I will give an update ASAP. Thanks very much.
---
Github user hequn8128 closed the pull request at:
https://github.com/apache/flink/pull/5094
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/5094
Oops... I finally understand what you mean. Thank you very much for your
explanation, @fhueske @xccui . I will close this PR :)
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/5094
Thanks for your replies.
I agree that it is valid to join late input data. What concerns me is that
the watermark has not been held back correctly.
Take `testRowTimeJoinWithCommonBounds2
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/5094
Thanks for your replies.
@fhueske : The watermark must be aligned with timestamps, and that is the
main reason why watermarks are held back (right?). The current window join may
output a record with
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/5094
Hi @xccui , thanks for your reply. Feel free to take it if you wish. I
still have some questions.
1. Considering the test `testRowTimeJoinWithCommonBounds2` in
`JoinHarnessTest`, do you mean
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/5094
[FLINK-8158] [table] Fix rowtime window inner join emits late data bug
## What is the purpose of the change
This pull request fixes a bug where the rowtime window inner join emits
late data.
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r148947768
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/DataStreamInnerJoin.scala
---
@@ -0,0 +1,287
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r148947767
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/join/DataStreamInnerJoin.scala
---
@@ -0,0 +1,287
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r148947760
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/datastream/DataStreamJoinRule.scala
---
@@ -0,0 +1,112
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r148947758
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/util/UpdatingPlanChecker.scala
---
@@ -56,16 +58,20 @@ object
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4471
Hi @xccui ,
Thanks for your review. I have updated the PR according to your comments.
@fhueske It would be great if you can also take a look.
Thank you, Hequn.
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4471
Hi @fhueske , the PR has been updated according to your comments and also
has been rebased to the latest master. The pr mainly includes the following
changes:
1. Refactor `UpdatingPlanChecker
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4674
Hi @fhueske ,
I was using a sliding window and a `NotSerializableException` was thrown
during an incremental snapshot. This can be fixed by implementing
`SingleElementIterable` with
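A minimal Java sketch of the kind of fix described (hypothetical class name; the real class lives in Flink's table runtime): making the helper iterable implement `Serializable` so that snapshotting it via Java serialization no longer throws `NotSerializableException`.

```java
import java.io.Serializable;
import java.util.Iterator;
import java.util.NoSuchElementException;

// Sketch only: an iterable over a single element that also implements
// Serializable, so including it in a snapshot that uses Java
// serialization does not fail with NotSerializableException.
class SingleElementIterableSketch<T extends Serializable>
        implements Iterable<T>, Serializable {

    private static final long serialVersionUID = 1L;
    private final T element;

    SingleElementIterableSketch(T element) {
        this.element = element;
    }

    @Override
    public Iterator<T> iterator() {
        return new Iterator<T>() {
            private boolean consumed = false;

            @Override
            public boolean hasNext() {
                return !consumed;
            }

            @Override
            public T next() {
                if (consumed) {
                    throw new NoSuchElementException();
                }
                consumed = true;
                return element;
            }
        };
    }
}
```

The marker interface costs nothing at runtime; only the serialization path during snapshotting changes behavior.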
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4471
Hi Fabian (@fhueske), sorry for the late update. I will resolve the
conflicts ASAP; it was a busy weekend :)
---
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4471#discussion_r143320024
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/rules/datastream/DataStreamJoinRule.scala
---
@@ -0,0 +1,93
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4624
Hi @fhueske ,
Thanks for your review. I have addressed all your comments and rebased the
code onto master :)
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4624
@wuchong Thanks for your review, I have updated the PR according to your
comments.
---
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/4624#discussion_r141354970
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/functions/UserDefinedFunction.scala
---
@@ -41,7 +41,7 @@ abstract class
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/4674
[FLINK-7627] [table] SingleElementIterable should implement with
Serializable
## What is the purpose of the change
*This pull request is a bugfix which implements
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/4624
[FLINK-7410] [table] Use toString method to display operator names for
UserDefinedFunction
## What is the purpose of the change
*Use toString method to display operator names for
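A sketch of the idea (simplified, hypothetical class and method names; the real change is in `UserDefinedFunction.scala`): the operator name is derived from the function's `toString`, which concrete functions can rely on or override to get a readable display name.

```java
// Sketch only: the display name of an operator is taken from the
// function's toString(), so the class-name default (or an override)
// yields readable operator names in plans and logs.
abstract class UserDefinedFunctionSketch {
    @Override
    public String toString() {
        // Default display name: the simple class name of the concrete UDF.
        return getClass().getSimpleName();
    }
}

class HashCode extends UserDefinedFunctionSketch {
    int eval(String s) {
        return s.hashCode();
    }
}
```

An operator description could then read, e.g., `select: HashCode(text)` rather than an opaque generated name.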
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/4471
[FLINK-6094] [table] Implement stream-stream proctime non-window inner join
## What is the purpose of the change
Implement stream-stream proctime non-window inner join for table-api
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4421
OK, so RocksDB state provides the keys in sorted order. Then we needn't
remove `!validTimestamp`. I will fill out the PR template next time.
Thanks, hequn
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/4421
Also, I think we need to remove `!validTimestamp` from `while
(keyIter.hasNext && !validTimestamp) {`, because new records may be inserted
at the head of rowMapState; in this case expi
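The loop-condition point above can be sketched in plain Java (hypothetical types: a `Map` stands in for Flink's keyed `MapState`, strings for the stored rows). When key iteration order is not guaranteed to be sorted, an early-exit flag like `!validTimestamp` would stop the scan at the first non-expired entry and leave later expired entries behind:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Sketch only: a plain Map stands in for the keyed rowMapState; keys are
// record timestamps. Because iteration order is NOT assumed sorted here,
// the loop must visit every entry rather than break at the first valid
// (non-expired) timestamp.
class ExpireScanSketch {

    /** Removes every entry older than minRetainedTime; returns the count removed. */
    static int expire(Map<Long, String> rowMapState, long minRetainedTime) {
        int removed = 0;
        Iterator<Map.Entry<Long, String>> keyIter = rowMapState.entrySet().iterator();
        while (keyIter.hasNext()) { // no "&& !validTimestamp" early exit
            if (keyIter.next().getKey() < minRetainedTime) {
                keyIter.remove();
                removed++;
            }
        }
        return removed;
    }
}
```

If the backend does guarantee sorted key order (as noted later for RocksDB), the early exit is safe again; that is exactly the trade-off under discussion.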
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/4421
[FLINK-7298] [table] Records can be cleared all at once when all data in
state is invalid
In `ProcTimeWindowInnerJoin.expireOutTimeRow`, we need not remove
records one by one from state
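The optimization named in the PR title can be sketched as follows (hypothetical signature; a plain `Map` stands in for the state):

```java
import java.util.Map;

// Sketch of the idea only: if the most recently inserted record is already
// past the retention bound, every record in state must be expired too, so
// the whole state can be dropped with one clear() instead of removing
// records one by one.
class ClearAllSketch {

    /** Returns true when the state was cleared in one shot. */
    static boolean clearAllIfExpired(Map<Long, String> rowMapState,
                                     long latestRecordTime,
                                     long minRetainedTime) {
        if (latestRecordTime < minRetainedTime) {
            rowMapState.clear(); // all data in state is invalid
            return true;
        }
        return false; // some records may still be valid; fall back to a scan
    }
}
```

The single `clear()` avoids a full iteration (and, in a real state backend, many per-entry delete calls) in the common case where the operator has been idle past the retention time.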
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/3850
[FLINK-6486] [table] (bugfix) Pass RowTypeInfo to CodeGenerator instead of
CRowTypeInfo
For now, the CodeGenerator only processes the Row type, so we should pass
RowTypeInfo to the CodeGenerator instead of
Github user hequn8128 closed the pull request at:
https://github.com/apache/flink/pull/3792
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/3792
[FLINK-6093][table] Implement and turn on retraction for table sink
The PR mainly includes the following changes:
1. Make `CRow` internal and revert all related code exposing `CRow`.
2
Github user hequn8128 closed the pull request at:
https://github.com/apache/flink/pull/3733
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/3733
hi @fhueske , thanks a lot for your review and help. I have addressed all
your comments and updated the PR. All changes have been checked before my
latest update.
Thanks, Hequn
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/3733
Hi @fhueske , the latest commit is a bugfix. It mainly includes the
following contents:
1. Add `CRow` support to the request type in `TableEnvironment`, so we can
sink with `CRow`
2
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112817965
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamGroupWindowAggregate.scala
---
@@ -107,6
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112817995
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala
---
@@ -84,16 +109,38
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112817910
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/stream/RetractionITCase.scala
---
@@ -0,0 +1,310
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112817968
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamOverAggregate.scala
---
@@ -129,11 +131,17
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112818000
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala
---
@@ -84,16 +109,38
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112817984
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/runtime/aggregate/GroupAggProcessFunction.scala
---
@@ -68,12 +70,35 @@ class
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3733#discussion_r112818005
--- Diff:
flink-libraries/flink-table/src/test/scala/org/apache/flink/table/api/scala/stream/RetractionITCase.scala
---
@@ -0,0 +1,310
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/3733
[FLINK-6091] [table] Implement and turn on retraction for aggregates
Implement functions for generating and consuming retract messages for
different aggregates.
1. add delete/add
Github user hequn8128 closed the pull request at:
https://github.com/apache/flink/pull/3696
---
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/3696
Hi @fhueske , thanks for your review and refactorings. I think it's pretty
good and I have learned a lot from it. I left a few comments in your PR.
As for the hardcoding problem, I thin
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3696#discussion_r111372529
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamRetractionRule.scala
---
@@ -0,0 +1,341
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3696#discussion_r111320841
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/plan/nodes/datastream/DataStreamCorrelate.scala
---
@@ -36,14 +36,14
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/3696
[FLINK-6090] [table] Add RetractionRule at the stage of decoration
Add RetractionRules at the stage of decoration. These rules can derive
NeedRetraction property and accumulating mode. There are
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/3564
Hi @fhueske , thanks for the review. I was expecting some other batch
use-cases where we may need to rewrite the logical plan after the volcano
phase. But you are right, we do not need to add it into
Github user hequn8128 commented on a diff in the pull request:
https://github.com/apache/flink/pull/3564#discussion_r107603328
--- Diff:
flink-libraries/flink-table/src/main/scala/org/apache/flink/table/api/TableEnvironment.scala
---
@@ -162,6 +162,27 @@ abstract class
Github user hequn8128 commented on the issue:
https://github.com/apache/flink/pull/3564
@twalthr Thanks for the review. I have addressed all your comments and
updated the PR
---
GitHub user hequn8128 opened a pull request:
https://github.com/apache/flink/pull/3564
[FLINK-6089] [table] Implement decoration phase for rewriting predicated
logical plan after volcano optimization phase
At present, there is no chance to modify the DataStreamRel tree after the