Hi Till,
Thanks for the comments and questions. To be clear, I am not saying that
the side-input stream does not work for the dynamic pattern update use
case, but I think OC is a better solution. The design goal for CEP dynamic
patterns is not only to make it work, but also to make it user-friendly and
Hi David,
We have a description in the FLIP about the case of TM failure without
the remote shuffle service. Basically, since the partitions are stored
at the TM, a TM failure requires recomputing the intermediate result.
If a Flink job uses the remote shuffle service, the partitions are
stored
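The recovery rule described above can be sketched as follows. This is a minimal illustration, not Flink's actual scheduler API; the class and function names are hypothetical:

```python
# Hypothetical sketch of the recovery decision described in the thread:
# partitions held on a failed TaskManager must be recomputed, while
# partitions stored in the remote shuffle service survive the failure.
from dataclasses import dataclass


@dataclass
class Partition:
    producer_tm: str          # TaskManager that produced this partition
    on_remote_shuffle: bool   # stored in the remote shuffle service?


def partitions_to_recompute(partitions, failed_tm):
    """Return the partitions lost with the failed TM, i.e. those it
    produced that are not held by the remote shuffle service."""
    return [
        p for p in partitions
        if p.producer_tm == failed_tm and not p.on_remote_shuffle
    ]
```

With the remote shuffle service, the list of lost partitions is empty and no upstream recomputation is triggered; without it, every partition produced by the failed TM comes back.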
Hi,
Many thanks for initiating the discussion!
Also +1 to drop the current DataSet-based Gelly library so that we could
finally drop the legacy DataSet API.
As for whether to keep the graph computing ability, from my side graph
query / graph computing and chaining them with the preprocessing
Hi Francesco,
Thanks a lot for the feedback.
> does it make sense for a lookup join to use hash distribution whenever
possible by default?
I prefer to enable the hash lookup join only when the hint is present in
the query, for the following reasons:
1. Plan compatibility
Add a hash shuffle by default
Zongwen Li created FLINK-25521:
Summary: The composition description of 'INTERVAL DAY TO SECOND'
in the Data Type is incorrect
Key: FLINK-25521
URL: https://issues.apache.org/jira/browse/FLINK-25521
+1 (binding).
Thank you for driving this
Best,
Guowei
On Wed, Jan 5, 2022 at 5:15 AM Arvid Heise wrote:
> +1 (binding).
>
> Thanks for driving!
>
> On Tue, Jan 4, 2022 at 10:31 AM Yun Gao
> wrote:
>
> > +1 (binding).
> >
> > Many thanks for proposing the FLIP!
> >
> > Best,
> > Yun
> >
> >
> >
Jingsong Lee created FLINK-25520:
Summary: Implement "ALTER TABLE ... COMPACT" SQL
Key: FLINK-25520
URL: https://issues.apache.org/jira/browse/FLINK-25520
Project: Flink
Issue Type: Sub-task
+1 (binding).
Thanks for driving!
On Tue, Jan 4, 2022 at 10:31 AM Yun Gao
wrote:
> +1 (binding).
>
> Many thanks for proposing the FLIP!
>
> Best,
> Yun
>
>
> --
> From: Martijn Visser
> Send Time: 2022 Jan. 4 (Tue.) 17:22
> To: dev
Hi Zhipeng,
I think that since we're seeing more code being externalised, for example
with the Flink Remote Shuffle service [1] and the ongoing discussion on the
external connector repository [2], it makes sense to go for your second
option. Maybe it fits under Flink Extended [3].
The main question
Martijn Visser created FLINK-25519:
Summary: Promote StreamExecutionEnvironment#fromSource() from
@Experimental to next phase
Key: FLINK-25519
URL: https://issues.apache.org/jira/browse/FLINK-25519
Hello,
I'm familiar with the Pub/Sub connectors from the Apache Beam project, but
quite a bit less so with Flink. This looks like a good learning
opportunity, and I'd be interested in helping out here.
If we decide to keep the connector, I can start taking a look at the next
step: going through
Hi,
Thanks Martijn for bringing this up for discussion. I think we could make
the discussion a little clearer by splitting it into two questions:
1. Should Flink drop Gelly?
2. Should Flink drop graph computing?
The answer to the first question could be yes, since there have been no
Francesco Guardiani created FLINK-25518:
Summary: Harden JSON Serialization utilities
Key: FLINK-25518
URL: https://issues.apache.org/jira/browse/FLINK-25518
Project: Flink
Issue
Hi Martijn,
Thanks for the feedback. I am not proposing to bundle the new graph
library with Alink. I am +1 for dropping the DataSet-based Gelly library,
but we probably need a new graph library in Flink for the possible
migration.
We haven't decided what to do yet and probably need more
Thanks for the experiment. +1 for the changes.
Yingjie Cao wrote on Tue, Jan 4, 2022 at 17:35:
> Hi all,
>
> After running some tests with the proposed default value (
> taskmanager.network.sort-shuffle.min-parallelism: 1,
> taskmanager.network.sort-shuffle.min-buffers: 512,
>
Hi everyone,
With the latest CVEs around log4j, we have seen that certain functionality
of the JVM can be quite dangerous. Concretely, the JNDI functionality [1]
seems to open quite a large attack vector against JVMs which has been used
in the log4j CVE case.
In order to avoid these kinds of
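For reference, the widely published stopgap for the log4j JNDI lookups at the time (alongside upgrading log4j itself, which remains the real fix) was to disable message lookups. A sketch follows; the jar name is purely illustrative:

```shell
# Stopgap mitigation for CVE-2021-44228 (log4j >= 2.10):
# disable message lookups via a JVM system property ...
java -Dlog4j2.formatMsgNoLookups=true -jar my-flink-job.jar

# ... or via an environment variable
export LOG4J_FORMAT_MSG_NO_LOOKUPS=true
```
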
Hi everyone,
We (a team at seznam.cz) are using the Gelly library for batch
anomaly detection in our graphs. It would be very nice to somehow keep this
functionality, maybe in a separate repository. Is there any replacement?
Best,
Lukas
On Mon, Jan 3, 2022 at 2:20 PM Martijn Visser
Wenjun Ruan created FLINK-25517:
Summary: Extract duplicate code in JarFileChecker
Key: FLINK-25517
URL: https://issues.apache.org/jira/browse/FLINK-25517
Project: Flink
Issue Type:
Timo Walther created FLINK-25516:
Summary: Add catalog object compile/restore options
Key: FLINK-25516
URL: https://issues.apache.org/jira/browse/FLINK-25516
Project: Flink
Issue Type:
宇宙先生 created FLINK-25515:
Summary: flink-kafka-consumer
Key: FLINK-25515
URL: https://issues.apache.org/jira/browse/FLINK-25515
Project: Flink
Issue Type: Improvement
Affects Versions: 1.11.2
Niklas Semmler created FLINK-25514:
Summary: YARN session tests scan unrelated logs for bugs
Key: FLINK-25514
URL: https://issues.apache.org/jira/browse/FLINK-25514
Project: Flink
Issue
Hi all,
After running some tests with the proposed default value (
taskmanager.network.sort-shuffle.min-parallelism: 1,
taskmanager.network.sort-shuffle.min-buffers: 512,
taskmanager.memory.framework.off-heap.batch-shuffle.size: 64m,
taskmanager.network.blocking-shuffle.compression.enabled:
+1 (binding).
Many thanks for proposing the FLIP!
Best,
Yun
--
From: Martijn Visser
Send Time: 2022 Jan. 4 (Tue.) 17:22
To: dev
Subject:Re: [VOTE] FLIP-191: Extend unified Sink interface to support small
file compaction
+1
+1 (non-binding). Thanks for driving the FLIP!
Best regards,
Martijn
On Tue, 4 Jan 2022 at 10:21, Fabian Paul wrote:
> Hi everyone,
>
> I'd like to start a vote on FLIP-191: Extend unified Sink interface to
> support small file compaction [1] that has been discussed in this
> thread [2].
>
>
Hi everyone,
I'd like to start a vote on FLIP-191: Extend unified Sink interface to
support small file compaction [1] that has been discussed in this
thread [2].
The vote will be open for at least 72 hours unless there is an objection or
not enough votes.
Best,
Fabian
[1]
Thanks!
Great job!
On Thu, Dec 30, 2021 at 11:10 PM Till Rohrmann wrote:
>
> This is really great news. Thanks a lot for all the effort Timo, Francesco
> and everyone else who was involved! I believe that this will make it a lot
> easier for our users to use any Scala version they want with
Hi Becket,
Thanks for the explanation. While I do agree that a general 2-way
communication pattern would be nice to have, I also believe that this
approach is probably at least an order of magnitude more complex to realize
than the side-input approach. Therefore, I would really like to understand why
chendan created FLINK-25513:
---
Summary: CoFlatMapFunction requires both two flat_maps to yield
something
Key: FLINK-25513
URL: https://issues.apache.org/jira/browse/FLINK-25513
Project: Flink
Yuan Mei created FLINK-25512:
Summary: Materialization Files are not cleaned up if no checkpoint
is using it
Key: FLINK-25512
URL: https://issues.apache.org/jira/browse/FLINK-25512
Project: Flink
Yuan Mei created FLINK-25511:
Summary: Leftovers after truncation are not cleaned up if
pre-uploading is enabled
Key: FLINK-25511
URL: https://issues.apache.org/jira/browse/FLINK-25511
Project: Flink