By the way, it is interesting to note that AWS seems to take the approach
that I suggested first.
All functions are SQL-standard compliant, and only dedicated functions
with a prefix, such as CURRENT_ROW_TIMESTAMP, deviate from the standard.
Regards,
Timo
On 01.03.21 08:45, Timo Walther wrote:
Thanks for driving this discussion, Konstantin.
I like the idea of having a bot that reminds the reporter/assignee/watchers
about inactive tickets and, if needed, downgrades/closes them automatically.
My two cents:
We could have labels like "downgraded-by-bot" / "closed-by-bot", so that it's
easier to filter for them.
How about we simply go for your first approach by having [query-start,
row, auto] as configuration parameters where [auto] is the default?
This sounds like a good consensus where everyone is happy, no?
This also allows users to restore the old per-row behavior for all
functions that we had.
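A minimal Python sketch of resolving such a three-valued option with auto as the default (the option values mirror the proposal, but the function name and the auto heuristic are hypothetical illustrations, not Flink's actual configuration API):

```python
# Hypothetical sketch: resolving a three-valued time-function evaluation
# mode ("query-start", "row", "auto") where "auto" is the default.
VALID_MODES = {"query-start", "row", "auto"}

def resolve_evaluation_mode(configured=None, is_streaming=True):
    """Return the effective mode; 'auto' decides based on the deployment mode."""
    mode = configured or "auto"
    if mode not in VALID_MODES:
        raise ValueError(f"unknown evaluation mode: {mode!r}")
    if mode == "auto":
        # assumption for illustration only: streaming evaluates time
        # functions per row, batch evaluates once at query start
        return "row" if is_streaming else "query-start"
    return mode

print(resolve_evaluation_mode())                    # row
print(resolve_evaluation_mode(is_streaming=False))  # query-start
```

Users who want the old behavior everywhere would just set the option to "row" explicitly.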
We could also think about reading this config option in the Table API. The
effect would be to call `await()` directly in an execute call. I could
also imagine this being useful, especially when you fire a lot of INSERT INTO
queries. We have had cases before where users were confused that the
execution
Lsw_aka_laplace created FLINK-21532:
---
Summary: Make CatalogTableImpl#toProperties and
CatalogTableImpl#fromProperties case sensitive
Key: FLINK-21532
URL: https://issues.apache.org/jira/browse/FLINK-21532
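The ticket title suggests the properties round trip is lossy for case-sensitive names. A toy Python sketch (not the actual CatalogTableImpl code) of why lowercasing keys during serialization breaks the round trip:

```python
# Toy illustration: a properties round trip that lowercases keys cannot
# restore case-sensitive names, while a case-preserving one can.
def to_properties_lossy(schema):
    # lowercasing collapses names that differ only in case
    return {name.lower(): type_ for name, type_ in schema.items()}

def to_properties_preserving(schema):
    # keeping the original case keeps both columns distinct
    return dict(schema)

schema = {"UserId": "BIGINT", "userid": "STRING"}  # two distinct columns

print(len(to_properties_lossy(schema)))       # 1 -> one column silently lost
print(len(to_properties_preserving(schema)))  # 2 -> round trip is lossless
```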
(Sorry for repeating this mail; the last one was not added to the same
mailing list thread. Very sorry for the inconvenience.)
Hi all,
Many thanks for all the deep thoughts!
> How to implement the stop-with-savepoint --drain/terminate command with
> this model: One idea could be to tell the sources that they should stop
> reading. This should trigger the EndOfPartitionEvent to be sent
> downstream.
> This will
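The quoted idea — sources stop reading and an end-of-partition marker flows downstream so operators can finish — can be sketched as a toy pipeline (plain Python stand-ins, not Flink's runtime classes):

```python
from collections import deque

END_OF_PARTITION = object()  # stand-in for Flink's EndOfPartitionEvent

def source(records, stop_requested):
    """Emit records until asked to stop, then emit the end marker."""
    out = deque()
    for i, rec in enumerate(records):
        if stop_requested(i):
            break  # "stop reading" signal from stop-with-savepoint
        out.append(rec)
    out.append(END_OF_PARTITION)  # tells downstream no more data will come
    return out

def downstream(channel):
    """Consume until the end marker, then finish."""
    seen = []
    while True:
        item = channel.popleft()
        if item is END_OF_PARTITION:
            return seen  # operator can now run its termination logic
        seen.append(item)

chan = source(range(10), stop_requested=lambda i: i >= 3)
print(downstream(chan))  # [0, 1, 2]
```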
Rui Li created FLINK-21531:
--
Summary: Introduce pluggable Parser
Key: FLINK-21531
URL: https://issues.apache.org/jira/browse/FLINK-21531
Project: Flink
Issue Type: Sub-task
Components:
Tzu-Li (Gordon) Tai created FLINK-21530:
---
Summary: Precompute TypeName's canonical string representation
Key: FLINK-21530
URL: https://issues.apache.org/jira/browse/FLINK-21530
Project: Flink
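Judging from the ticket title alone, the idea is to compute the canonical string once instead of rebuilding it on every access. A generic Python sketch of that precomputation pattern (the TypeName shape here is a hypothetical illustration, not the actual class):

```python
class TypeName:
    """Toy type name; the canonical form is precomputed once in the constructor."""
    __slots__ = ("namespace", "name", "_canonical")

    def __init__(self, namespace, name):
        self.namespace = namespace
        self.name = name
        # precompute instead of concatenating on every canonical_string() call
        self._canonical = f"{namespace}/{name}"

    def canonical_string(self):
        return self._canonical  # O(1), no per-call string building

t = TypeName("com.example", "greeter")
print(t.canonical_string())  # com.example/greeter
```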
Rui Li created FLINK-21529:
--
Summary: FLIP-152: Hive Query Syntax Compatibility
Key: FLINK-21529
URL: https://issues.apache.org/jira/browse/FLINK-21529
Project: Flink
Issue Type: New Feature
I also asked some users for their opinion on introducing a config option
prefixed with "table" that does not affect methods in the Table API and SQL.
All of them were rather shocked by such a question, asking
why we would do anything like this.
This kind of reaction actually doesn't
wxmimperio created FLINK-21528:
--
Summary: Rest Api Support Wildcard
Key: FLINK-21528
URL: https://issues.apache.org/jira/browse/FLINK-21528
Project: Flink
Issue Type: Improvement
Spongebob created FLINK-21527:
-
Summary: fromValues function cost enormous network buffer
Key: FLINK-21527
URL: https://issues.apache.org/jira/browse/FLINK-21527
Project: Flink
Issue Type: Bug
Zhilong Hong created FLINK-21526:
Summary: Replace scheduler benchmarks with wrapper classes
Key: FLINK-21526
URL: https://issues.apache.org/jira/browse/FLINK-21526
Project: Flink
Issue
Zhilong Hong created FLINK-21525:
Summary: Move scheduler benchmarks to Flink and add unit tests
Key: FLINK-21525
URL: https://issues.apache.org/jira/browse/FLINK-21525
Project: Flink
Issue
Zhilong Hong created FLINK-21524:
Summary: Replace scheduler benchmarks with wrapper classes
Key: FLINK-21524
URL: https://issues.apache.org/jira/browse/FLINK-21524
Project: Flink
Issue
In "stop-with-savepoint --drain", MAX_WATERMARK is not an issue. For a
normally finishing task, disallowing unaligned checkpoints does not solve the
problem, as MAX_WATERMARK could be persisted in a downstream task. When
the scenario @Piotr depicted occurs, the downstream (or further downstream)
window operator
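The concern, as I read it: once MAX_WATERMARK has been persisted and reaches a downstream window operator, it fires every window, including ones that should still be open. A toy event-time sketch (plain Python; MAX_WATERMARK modeled as Long.MAX_VALUE, as in Flink):

```python
MAX_WATERMARK = 2**63 - 1  # Long.MAX_VALUE, Flink's "end of time" watermark

class WindowOperator:
    """Toy window operator keyed by window end timestamp."""
    def __init__(self):
        self.windows = {}  # window end timestamp -> buffered elements
        self.fired = []

    def add(self, window_end, element):
        self.windows.setdefault(window_end, []).append(element)

    def on_watermark(self, wm):
        # every window whose end <= watermark fires; MAX_WATERMARK fires all
        for end in sorted(w for w in self.windows if w <= wm):
            self.fired.append((end, self.windows.pop(end)))

op = WindowOperator()
op.add(10, "a")
op.add(20, "b")
op.on_watermark(MAX_WATERMARK)
print(len(op.fired))  # 2 -> all windows fired, even "future" ones
```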
zouyunhe created FLINK-21523:
Summary: ArrayIndexOutOfBoundsException occurs while run a hive
streaming source job with partitioned table source
Key: FLINK-21523
URL:
Hey Roman,
Thank you very much for preparing RC2.
+1 from my side.
1. Verified Checksums and GPG signatures.
2. Verified that the source archives do not contain any binaries.
3. Successfully built the source with Maven.
4. Started a local Flink cluster, ran the streaming WordCount example with
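Step 1 of the checklist above can be sketched generically with Python's standard library (a toy file stands in for a release artifact; verifying the GPG signature additionally requires gpg itself):

```python
import hashlib
import os
import tempfile

def sha512_of(path):
    """Compute the SHA-512 digest of a file, streaming in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# toy artifact standing in for a release tarball
fd, path = tempfile.mkstemp()
os.write(fd, b"release artifact bytes")
os.close(fd)

expected = sha512_of(path)              # what the .sha512 file would contain
print(sha512_of(path) == expected)      # True -> checksum verification passes
os.remove(path)
```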
I think you are right about the problem of endOfInput: endOfInput should not
be used to commit final results. In fact, if this termination fails, then we
might end up with a different outcome of the job, which is just as valid as
the one before the failure.
Concerning unaligned checkpoints, I think
Kezhu Wang created FLINK-21522:
--
Summary: Iterative stream could not work with stop-with-savepoint
Key: FLINK-21522
URL: https://issues.apache.org/jira/browse/FLINK-21522
Project: Flink
Issue
Hi Till,
Just for bookkeeping, some observations from the current implementation.
> With this model, the final checkpoint is quite simple because it is
ingrained in the lifecycle of an operator. Put differently, an operator
will only terminate after it has committed its side effects and seen the