Hi all,
After the discussion in [1], I would like to open a voting thread for
FLIP-144 [2],
which proposes to add a new native Kubernetes HA service.
The vote will be open until October 6th (72h + weekend), unless there is
an objection or not enough votes.
[1].
3. Makes sense to me. And we could add a new HA solution "StatefulSet + PV +
FileSystem" at any time in the future if we need it.
Since there are no more open questions, I will start the voting now.
Thanks all for your comments and feedback. Feel free to continue the
discussion if you have
other
Dian Fu created FLINK-19489:
---
Summary: SplitFetcherTest.testNotifiesWhenGoingIdleConcurrent gets stuck
Key: FLINK-19489
URL: https://issues.apache.org/jira/browse/FLINK-19489
Project: Flink
Issue
Thanks, Aljoscha! That's really helpful.
I think I only want to do my cleanup when the task successfully finishes,
which means the cleanup should only be invoked when the task is
guaranteed not to be executed again in one given batch execution. Is there
any way to do so?
Thanks for your help!
Satyam Shekhar created FLINK-19488:
--
Summary: Failed compilation of generated class
Key: FLINK-19488
URL: https://issues.apache.org/jira/browse/FLINK-19488
Project: Flink
Issue Type: Bug
Piotr Nowojski created FLINK-19487:
--
Summary: Checkpoint start delay is always zero for single channel tasks
Key: FLINK-19487
URL: https://issues.apache.org/jira/browse/FLINK-19487
Project: Flink
Thank you @Chesnay, let me try this change.
On Thu, Oct 1, 2020 at 1:21 PM Chesnay Schepler wrote:
> You could also try using streams to make it a little more concise:
>
> directories.stream()
>     .map(directory -> createInputStream(environment, directory))
>     .reduce(DataStream::union)
>
appleyuchi created FLINK-19486:
--
Summary: expected: class org.apache.flink.runtime.state.KeyGroupsStateHandle, but found: class org.apache.flink.runtime.state.IncrementalRemoteKeyedStateHandle
Key: FLINK-19486
URL:
Hi,
Are the credentials usually supplied by setting them in the Kafka client
properties?
If so, you can set the client properties in remote modules as shown in [1].
Otherwise, could you briefly explain / point me to some link on the details
of how to authenticate for Confluent Kafka?
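If it helps as a starting point, the client properties in a remote-module definition look roughly like the fragment below. Treat this as an assumption-heavy sketch: the YAML wrapper (ingress type, ids, and the layout of the properties list) varies by StateFun version, so please verify against the docs in [1]. Only the Kafka SASL client property names themselves (security.protocol, sasl.mechanism, sasl.jaas.config) are standard Confluent Cloud settings; <broker>, <API_KEY>, and <API_SECRET> are placeholders.

```yaml
# Hypothetical module.yaml fragment -- adjust keys to your StateFun version.
version: "1.0"
module:
  meta:
    type: remote
  spec:
    ingresses:
      - ingress:
          meta:
            type: statefun.kafka.io/routable-protobuf-ingress  # assumed ingress type
            id: example/my-ingress
          spec:
            address: <broker>:9092
            properties:
              - security.protocol: SASL_SSL
              - sasl.mechanism: PLAIN
              - sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="<API_KEY>" password="<API_SECRET>";
```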
Best,
I'm trying to use Statefun with the fully-managed Confluent Kafka as my
ingress and egress. Where should I define my credentials when using the
remote module?
Hi!
Yes, AbstractRichFunction.close() would be the right place to do
cleanup. This method is called both in case of successful finishing and
also in the case of failures.
For BATCH execution, Flink will do backtracking upwards from the failed
task(s) to see if intermediate results from
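The close() contract described above can be sketched without Flink. This is a minimal, self-contained analogue: RichFn, run(), and the log strings are hypothetical stand-ins, not Flink's AbstractRichFunction; the only point it demonstrates is that close() runs in the finally path on both success and failure.

```java
import java.util.ArrayList;
import java.util.List;

public class CloseLifecycle {
    // Hypothetical stand-in for a rich function; NOT Flink API.
    static class RichFn {
        final List<String> log = new ArrayList<>();
        void open() { log.add("open"); }
        String map(String in) {
            if (in.equals("boom")) throw new RuntimeException("task failed");
            return in.toUpperCase();
        }
        // Cleanup hook: invoked after success AND after failure.
        void close() { log.add("close"); }
    }

    // Mimics the runtime contract: close() always runs, whether the
    // task finishes successfully or fails.
    static List<String> run(RichFn fn, String input) {
        fn.open();
        try {
            fn.log.add("out=" + fn.map(input));
        } catch (RuntimeException e) {
            fn.log.add("failed");
        } finally {
            fn.close();
        }
        return fn.log;
    }

    public static void main(String[] args) {
        System.out.println(run(new RichFn(), "ok"));   // close after success
        System.out.println(run(new RichFn(), "boom")); // close after failure
    }
}
```

Success-only cleanup therefore needs an extra flag or state check inside close(), since the hook itself cannot tell the two cases apart.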
3. We could avoid force deletions from within Flink. If the user does it,
then we don't give guarantees.
I am fine with your current proposal. +1 for moving forward with it.
Cheers,
Till
On Thu, Oct 1, 2020 at 2:32 AM Yang Wang wrote:
> 2. Yes. This is exactly what I mean. Storing the HA
You could also try using streams to make it a little more concise:

directories.stream()
    .map(directory -> createInputStream(environment, directory))
    .reduce(DataStream::union)
    .map(joinedStream -> joinedStream.addSink(kafka));
On 10/1/2020 9:48 AM, Chesnay Schepler wrote:
Do you know the list of directories when you submit the job?
If so, then you can iterate over them, create a source for each
directory, union them, and apply the sink to the union.
private static DataStream createInputStream(StreamExecutionEnvironment
environment, String directory) {
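Since the Flink snippet above needs a cluster runtime to execute, here is a self-contained sketch of the same iterate-map-reduce union idiom using plain java.util.stream, with Stream::concat standing in for DataStream::union. createInputStream and the record values are hypothetical stand-ins for the per-directory Flink sources.

```java
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class UnionSketch {

    // Hypothetical stand-in for the createInputStream helper in the thread:
    // each "directory" yields a stream of records.
    static Stream<String> createInputStream(String directory) {
        return Stream.of(directory + "/a", directory + "/b");
    }

    // Same reduce-based union idiom as the Flink snippet.
    static List<String> unionAll(List<String> directories) {
        return directories.stream()
                .map(UnionSketch::createInputStream)
                .reduce(Stream::concat)                  // union all per-directory streams
                .map(s -> s.collect(Collectors.toList()))
                .orElse(List.of());                      // empty input -> empty union
    }

    public static void main(String[] args) {
        System.out.println(unionAll(List.of("dir1", "dir2")));
        // -> [dir1/a, dir1/b, dir2/a, dir2/b]
    }
}
```

Note that reduce returns an Optional because the directory list may be empty; the Flink version has the same shape, which is why the sink is attached inside the trailing map.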
Kostas Kloudas created FLINK-19485:
--
Summary: Consider runtime-mode when creating the StreamGraph
Key: FLINK-19485
URL: https://issues.apache.org/jira/browse/FLINK-19485
Project: Flink
Hi Guys,
I am stuck with this, could someone please help me here?
Regards,
Satya
On Wed, 30 Sep 2020 at 11:09 AM, Satyaa Dixit wrote:
> Hi Guys,
>
> Sorry to bother you again, but could someone help me here? Any help in
> this regard will be much appreciated.
>
> Regards,
> Satya
>
> On Tue, Sep 29, 2020 at 2:57
Hi everyone,
I have added a section on Performance Optimization describing how to
improve performance in the short term and the long term,
and sketching the future performance potential under the new window API.
Introducing the window API is just the first step; we will
continuously improve the
Hi Pengcheng,
Yes, the window TVF is part of the FLIP. Welcome to contribute and join the
discussion.
Regarding the SESSION window aggregation, users can use the existing
grouped session window function.
Best,
Jark
On Sun, 27 Sep 2020 at 21:24, liupengcheng
wrote:
> Hi Jark,
> Thanks