+1. Making concepts clear and understandable to all developers is very
important.
Thanks Leonard for driving this.
Best,
Kurt
On Tue, Aug 25, 2020 at 10:47 AM Rui Li wrote:
> +1. Thanks Leonard for driving this.
>
> On Tue, Aug 25, 2020 at 10:10 AM Jark Wu wrote:
>
> > Thanks
The Apache Flink community is very happy to announce the release of Apache
Flink 1.10.2, which is the first bugfix release for the Apache Flink 1.10
series.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
Some additional information:
A. Refer to Yangze's answer.
B. If you are using the native K8s integration with ZooKeeper HA enabled,
whenever you want to upgrade, stop the Flink cluster
and start a new one with the same cluster-id. It will recover from the latest
checkpoint.
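The HA settings behind step B live in flink-conf.yaml. A minimal sketch, assuming ZooKeeper HA; the cluster id, quorum address, and storage path below are hypothetical placeholders:

```yaml
kubernetes.cluster-id: my-flink-cluster        # reuse the same id when restarting
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-1:2181  # hypothetical ZooKeeper quorum
high-availability.storageDir: s3://flink/ha    # hypothetical HA metadata path
```

With these set, a new cluster started under the same cluster-id picks up the HA metadata and recovers from the latest checkpoint.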
If you are
Hi Aljoscha,
I'm leaning slightly towards keeping the 0.10 connector, since Kafka 0.10
still has a steady user base from my observation.
But if we drop the 0.10 connector, can we ensure that users would be able to
migrate smoothly to the 0.11/universal connector?
If I remember correctly, the
+1. Thanks Leonard for driving this.
On Tue, Aug 25, 2020 at 10:10 AM Jark Wu wrote:
> Thanks Leonard!
>
> +1 to the FLIP.
>
> Best,
> Jark
>
> On Tue, 25 Aug 2020 at 01:41, Fabian Hueske wrote:
>
>> Leonard, Thanks for updating the FLIP!
>>
>> +1 to the current version.
>>
>> Thanks, Fabian
Thanks Leonard!
+1 to the FLIP.
Best,
Jark
On Tue, 25 Aug 2020 at 01:41, Fabian Hueske wrote:
> Leonard, Thanks for updating the FLIP!
>
> +1 to the current version.
>
> Thanks, Fabian
>
> On Mon, Aug 24, 2020 at 17:56, Leonard Xu wrote:
>
>> Hi all,
>>
>> I would like to start the
Shuiqiang Chen created FLINK-19041:
--
Summary: Add dependency management for ConnectedStream in Python
DataStream API.
Key: FLINK-19041
URL: https://issues.apache.org/jira/browse/FLINK-19041
Project:
Leonard, Thanks for updating the FLIP!
+1 to the current version.
Thanks, Fabian
On Mon, Aug 24, 2020 at 17:56, Leonard Xu wrote:
> Hi all,
>
> I would like to start the vote for FLIP-132 [1], which has been discussed
> and
> reached a consensus in the discussion thread [2].
>
> The
++dev@flink.apache.org
On Mon, Aug 24, 2020, 7:31 PM sidhant gupta wrote:
> Hi User
>
> How do the JobManager and TaskManager communicate with each other? How to set
> up a connection between a JobManager and a TaskManager running in
> different/same EC2 instances? Is it HTTP or TCP? How the service
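As background for the question above: Flink's internal RPC between JobManager and TaskManager runs over TCP (not HTTP), and the endpoints are set in flink-conf.yaml. A minimal sketch; the host name is a hypothetical placeholder:

```yaml
jobmanager.rpc.address: jm-host   # hypothetical JobManager host, reachable from TaskManagers
jobmanager.rpc.port: 6123         # default RPC (TCP) port
taskmanager.rpc.port: 0           # 0 = pick an ephemeral port
blob.server.port: 6124            # TCP port for blob (jar/artifact) transfers
```

On EC2 this means the security groups must allow TCP traffic on these ports between the instances.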
Hi all,
I would like to start the vote for FLIP-132 [1], which has been discussed and
reached a consensus in the discussion thread [2].
The vote will be open until 27th August (72h), unless there is an objection or
not enough votes.
Best,
Leonard
[1]
Piotr Nowojski created FLINK-19040:
--
Summary: SourceOperator is not closing SourceReader
Key: FLINK-19040
URL: https://issues.apache.org/jira/browse/FLINK-19040
Project: Flink
Issue Type:
Thanks Fabian, Rui and Jark for the nice discussion!
It seems everyone involved in this discussion has reached a consensus.
I will start another vote thread later.
Best,
Leonard
> On Aug 24, 2020, at 20:54, Fabian Hueske wrote:
>
> Hi everyone,
>
> Thanks for the good discussion!
>
> I'm fine
Ayrat Hudaygulov created FLINK-19039:
Summary: Parallel Flink Kafka Consumers compete with each other
Key: FLINK-19039
URL: https://issues.apache.org/jira/browse/FLINK-19039
Project: Flink
Hi all,
this thought came up on FLINK-17260 [1] but I think it would be a good
idea in general. The issue reminded us that Kafka didn't have an
idempotent/fault-tolerant Producer before Kafka 0.11.0. By now we have
had the "modern" Kafka connector that roughly follows new Kafka releases
for
Hi everyone,
Thanks for the good discussion!
I'm fine keeping the names "event-time temporal join" and "processing-time
temporal join".
Also +1 for Rui's proposal using "versioned table" for versioned dynamic
table and "regular table" for regular dynamic table.
Thanks,
Fabian
On Mon, 24.
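For readers following the naming discussion above: an event-time temporal join against a versioned table is written in Flink SQL with FOR SYSTEM_TIME AS OF. The table and column names below are hypothetical:

```sql
SELECT o.order_id, o.price * r.rate AS converted_price
FROM Orders AS o
JOIN CurrencyRates FOR SYSTEM_TIME AS OF o.order_time AS r
  ON o.currency = r.currency;
```

Here CurrencyRates plays the role of the versioned table: the join resolves each order against the rate version valid at the order's event time.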
Hi everyone,
This is an update for release 1.10.2.
The release process is currently pending on the PR [1] to have Flink Docker
images published on Docker Hub.
Once it is merged, Flink 1.10.2 can be released shortly.
[1] https://github.com/docker-library/official-images/pull/8599
Thanks,
Zhu
Dian Fu created FLINK-19038:
---
Summary: It doesn't support to call Table.fetch() continuously
Key: FLINK-19038
URL: https://issues.apache.org/jira/browse/FLINK-19038
Project: Flink
Issue Type: Bug
The heap dump did not show anything too suspicious. The only thing I
noticed is that there are 13 ChildFirstClassLoaders whereas there are only
6 Task instances in the heap dump. Are you running all 13 tasks on the same
TaskExecutor?
Cheers,
Till
On Mon, Aug 24, 2020 at 2:01 PM Till Rohrmann
What could also cause the problem is that the metaspace memory budget is
configured too tightly. Here is a pointer to increasing the metaspace size
[1].
[1]
https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_trouble.html#outofmemoryerror-metaspace
Cheers,
Till
On Mon, Aug 24,
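Concretely, the fix pointed to in [1] is a one-line change in flink-conf.yaml; 512m below is only an illustrative value, not a recommendation:

```yaml
taskmanager.memory.jvm-metaspace.size: 512m  # raise if classloading-heavy jobs hit "OutOfMemoryError: Metaspace"
```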
Hi, Mazen
AFAIK, we now have two K8s integration, native[1] and standalone[2]. I
guess the native K8s integration is what you mean by active K8S
integration.
Regarding the reactive mode, I think it is still work in progress;
you could refer to [3].
[1]
Hi,
could you share with us the Flink cluster logs? This would help answering a
lot of questions around your setup and the Flink version you are using.
Thanks a lot!
Cheers,
Till
On Mon, Aug 24, 2020 at 10:48 AM 耿延杰 wrote:
> Still failed after every 12 tasks.
> And the exception stack of
Robert Metzger created FLINK-19037:
--
Summary: Introduce proper IO executor in Dispatcher
Key: FLINK-19037
URL: https://issues.apache.org/jira/browse/FLINK-19037
Project: Flink
Issue Type:
To the best of my knowledge, for Flink deployment on Kubernetes we have two
options as of now : (1) active K8S integration with separate job manager per
job and (2) reactive container mode with auto rescale based on some metrics:
Could you please give me a hint on the below:
A - Are the two
Roc Marshal created FLINK-19036:
---
Summary: Translate page 'Application Profiling & Debugging' of
'Debugging & Monitoring' into Chinese
Key: FLINK-19036
URL: https://issues.apache.org/jira/browse/FLINK-19036
Hi all,
After the discussion in [1], I would like to open a voting thread for
FLIP-134 [2] which discusses the semantics that the DataStream API
will expose when applied on a bounded input.
The vote will be open until 27th August (72h), unless there is an
objection or not enough votes.
Cheers,
Dawid Wysakowicz created FLINK-19035:
Summary: Remove deprecated DataStream#fold() method and all
related classes
Key: FLINK-19035
URL: https://issues.apache.org/jira/browse/FLINK-19035
Project:
Thanks a lot for the discussion!
I will open a voting thread shortly!
Kostas
On Mon, Aug 24, 2020 at 9:46 AM Kostas Kloudas wrote:
>
> Hi Guowei,
>
> Thanks for the insightful comment!
>
> I agree that this can be a limitation of the current runtime, but I
> think that this FLIP can go on as
Dawid Wysakowicz created FLINK-19034:
Summary: Remove deprecated
StreamExecutionEnvironment#set/getNumberOfExecutionRetries
Key: FLINK-19034
URL: https://issues.apache.org/jira/browse/FLINK-19034
Dawid Wysakowicz created FLINK-19033:
Summary: Cleanups of DataStream API
Key: FLINK-19033
URL: https://issues.apache.org/jira/browse/FLINK-19033
Project: Flink
Issue Type: Improvement
Dawid Wysakowicz created FLINK-19032:
Summary: Remove deprecated RuntimeContext#getAllAccumulators
Key: FLINK-19032
URL: https://issues.apache.org/jira/browse/FLINK-19032
Project: Flink
Dawid Wysakowicz created FLINK-19031:
Summary: Remove deprecated setStateBackend(AbstractStateBackend)
Key: FLINK-19031
URL: https://issues.apache.org/jira/browse/FLINK-19031
Project: Flink
Hi all,
## Motivation
FLIP-63 [1] introduced initial support for the PARTITIONED BY clause to an
extent that lets us support Hive's partitioning.
But this partition definition is specific to Hive/file
systems; as the system continues to develop, there are new
requirements:
-
Still failed after every 12 tasks.
And the exception stack of failed tasks is different.
For example, the most recently failed task's exception info:
Caused by: java.lang.OutOfMemoryError: Metaspace
at java.lang.ClassLoader.defineClass1(Native Method)
at
Additional info:
The exception info in Flink Manager Page:
Caused by: java.lang.OutOfMemoryError: Metaspace
at java.lang.ClassLoader.defineClass1(Native Method)
at
java.lang.ClassLoader.defineClass(ClassLoader.java:757)
at
Which Flink version are you using?
On 24/08/2020 10:20, 耿延杰 wrote:
Hi,
I caught "OutOfMemoryError: Metaspace" on a batch task when writing into
Clickhouse.
The attached *.java file is my task code.
And I find that, after running 12 tasks, the 13th task fails.
And the
Hi,
thanks for reaching out to the community. Could you share a bit more
details about the cluster setup (session cluster, per-job cluster
deployment), Flink version and maybe also share the logs with us? Sharing
your user code and the libraries you are using can also be helpful in
figuring out
Hi,
I caught "OutOfMemoryError: Metaspace" on a batch task when writing into
Clickhouse.
The attached *.java file is my task code.
And I find that, after running 12 tasks, the 13th task fails, and the
exception is always "OutOfMemoryError: Metaspace". See "task-failed.png".
I
Hi Flink devs,
We have a few upcoming / implemented features for Stateful Functions on the
radar, and would like to give a heads up on what to expect for the next
release:
1. Upgrade support for Flink 1.11.x. [FLINK-18812]
2. Fine grained control on remote state configuration, such as state TTL.
Hi Guowei,
Thanks for the insightful comment!
I agree that this can be a limitation of the current runtime, but I
think that this FLIP can go on as it discusses mainly the semantics
that the DataStream API will expose when applied on bounded data.
There will definitely be other FLIPs that will
Hi, Klou
Thanks for your proposal. It's a very good idea.
Just a little comment about the "Batch vs Streaming Scheduling": in the
AUTOMATIC execution mode, maybe we could not pick the BATCH execution mode
even if all sources are bounded. For example, some applications would use the
Hi everyone,
I would like to start a discussion thread on "Support Pandas UDAF in
PyFlink".
Pandas UDFs have been supported since Flink 1.11 (FLIP-97 [1]). They solve the
high serialization/deserialization overhead in Python UDFs and make it
convenient to leverage popular Python libraries such as
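As a sketch of what a Pandas UDAF boils down to (independent of the eventual PyFlink API, which this thread is still discussing): an aggregate function folds incoming batches into an accumulator, merges partial accumulators, and emits one value. Plain Python lists stand in for pandas.Series here, and all names are hypothetical:

```python
class MeanAccumulator:
    """Running sum and count for an incremental mean."""
    def __init__(self):
        self.sum = 0.0
        self.count = 0

def create_accumulator():
    return MeanAccumulator()

def accumulate(acc, batch):
    # In a real Pandas UDAF this batch would be a pandas.Series,
    # processed vectorized instead of row by row.
    acc.sum += sum(batch)
    acc.count += len(batch)

def merge(acc, other):
    # Combine partial accumulators from parallel subtasks.
    acc.sum += other.sum
    acc.count += other.count

def get_value(acc):
    return acc.sum / acc.count if acc.count else None

acc = create_accumulator()
accumulate(acc, [1.0, 2.0, 3.0])  # first batch
accumulate(acc, [4.0, 5.0])       # second batch
result = get_value(acc)           # 3.0
```

The batch-at-a-time accumulate call is where the serialization savings of FLIP-97 come from: one Arrow batch crosses the JVM/Python boundary instead of one call per row.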