+1
In 1.10, we set the default planner for the SQL Client to the Blink planner [1].
Looks good.
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Set-default-planner-for-SQL-Client-to-Blink-planner-in-1-10-release-td36379.html
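For reference, in Flink 1.10 the SQL Client planner is chosen in sql-client-defaults.yaml; a minimal sketch of the relevant fragment (values shown are the 1.10 defaults):

```yaml
# sql-client-defaults.yaml (Flink 1.10): `planner` selects between
# "blink" and "old"; "blink" is the default since 1.10.
execution:
  planner: blink
  type: streaming
```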
Best,
Jingsong Lee
On Wed, Apr 1, 2020 at 11:39 AM
+1 to making the Blink planner the default planner.
We should give the Blink planner more exposure to encourage users to try out
new features, and lead users to migrate to the Blink planner.
Glad to see the Blink planner has been used in production since 1.9! @Benchao
Best,
Jark
On Wed, 1 Apr 2020 at 11:31, Benchao Li
Hi Kurt,
It's exciting to hear that the community aims to make the Blink planner the
default in 1.11.
We have been using the Blink planner for stream processing since 1.9; it
works very well
and covers many use cases in our company.
So +1 to making it the default in 1.11 from our side.
On Wed, Apr 1, 2020, Kurt Young wrote:
Hi Dev and User,
The Blink planner for Table API & SQL was introduced in Flink 1.9 and is
already the default planner for
the SQL Client in Flink 1.10. And since we have already decided not to
introduce any new features to the
original Flink planner, it already lacks many great features that
the community
Thanks a lot!
On Wed, Apr 1, 2020 at 1:13 AM, Jark Wu wrote:
> Hi Jing,
>
> I created https://issues.apache.org/jira/browse/FLINK-16889 to support
> converting from BIGINT to TIMESTAMP.
>
> Best,
> Jark
>
> On Mon, 30 Mar 2020 at 20:30, jingjing bai
> wrote:
>
>> Hi Jark Wu!
>>
>> Is there a FLIP to do so? I'm
Hi Jing,
I created https://issues.apache.org/jira/browse/FLINK-16889 to support
converting from BIGINT to TIMESTAMP.
Best,
Jark
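Until FLINK-16889 lands, a common workaround in Flink SQL is to go through FROM_UNIXTIME; a hedged sketch, where `ts_sec` and `my_table` are hypothetical names for a BIGINT column holding epoch seconds and its table:

```sql
-- Workaround sketch (before FLINK-16889): convert an epoch-seconds
-- BIGINT to TIMESTAMP via a formatted string.
SELECT TO_TIMESTAMP(FROM_UNIXTIME(ts_sec)) AS event_ts
FROM my_table;
```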
On Mon, 30 Mar 2020 at 20:30, jingjing bai
wrote:
> Hi Jark Wu!
>
> Is there a FLIP to do so? I'm very glad to learn from the idea.
>
>
> Best,
> jing
>
Jark Wu wrote on 2020
Forwarding Seth's answer to the list
---------- Forwarded message ---------
From: Seth Wiesman
Date: Tue, Mar 31, 2020 at 4:47 PM
Subject: Re: Complex graph-based sessionization (potential use for stateful
functions)
To: Krzysztof Zarzycki
Cc: user ,
Hi Krzysztof,
This is a great use case fo
Hi, I'm trying to run a job in a Flink cluster on Amazon EMR from Java code,
but I'm having some problems.
This is how I create the cluster:
StepConfig copyJarStep = new StepConfig()
.wit
I could not make it work as I wanted with taskmanager.numberOfTaskSlots set
to 2, but I found a way to run them in parallel: just creating a cluster
for each job, since they are independent.
Thanks
On Mon, Mar 30, 2020 at 4:22 PM Gary Yao wrote:
> Can you try to set config option taskmanager.num
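For reference, the option Gary's truncated reply points at is presumably taskmanager.numberOfTaskSlots in flink-conf.yaml; a minimal sketch, with the value 2 taken from the thread:

```yaml
# flink-conf.yaml: two slots per TaskManager, so two parallelism-1
# jobs can share a single TaskManager instead of needing one cluster each.
taskmanager.numberOfTaskSlots: 2
```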
Currently I have a Flink consumer with the following properties. Flink consumes
records at around 400 messages/sec at the start of the program, but later on, as
numBuffersOut exceeds 100, the data rate falls to 200 messages/sec. I've
set parallelism to only 1; it's an Avro-based consumer and checkpointing is
disabled. D
Hi
In both Flink 1.9.2 and 1.10 we have struggled with random deserialization and
index-out-of-range exceptions in one of our jobs. We also get out-of-memory
exceptions. We have now identified latency tracking together with broadcast state
as causing the problem. When we do integration testing lo
The container cut-off accounts not only for metaspace, but also for native
memory footprint such as thread stacks, code cache, and compressed class space.
If you run streaming jobs with the RocksDB state backend, it also accounts for
the RocksDB memory usage.
The consequence of less cut-off depends on your env
Hi community,
Now I am optimizing the Flink 1.6 task memory configuration. Looking at the
source code: first, the Flink task configures the cut-off memory as cut-off
memory = Math.max(600, containerized.heap-cutoff-ratio * TaskManager
memory), where containerized.heap-cutoff-ratio defaults to 0.25. For
exa
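The formula above can be sketched in plain Java; `CutoffExample` and the input values are illustrative, using the Flink 1.6 defaults of a 0.25 ratio and a 600 MB minimum:

```java
// Sketch of the Flink 1.6 containerized heap cut-off described above:
// cut-off = max(containerized.heap-cutoff-min, ratio * TaskManager memory).
public class CutoffExample {
    static long cutoffMb(long tmMemoryMb, double ratio, long minMb) {
        return Math.max(minMb, (long) (ratio * tmMemoryMb));
    }

    public static void main(String[] args) {
        // 0.25 * 4096 = 1024 MB, above the 600 MB floor
        System.out.println(cutoffMb(4096, 0.25, 600));
        // 0.25 * 2000 = 500 MB, so the 600 MB minimum wins
        System.out.println(cutoffMb(2000, 0.25, 600));
    }
}
```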
Hi Tzu-Li,
thanks a lot for your answer. I will try this!
However, I was looking for something that fully simulates a Flink
cluster, including JobManager-to-TaskManager serialization issues and
full isolation of the task managers (I guess with the MiniClusterResource, we
are still in the same
Hello Mike,
thanks for the info.
I tried to do something similar in Java. I'm not there yet, but I think it
should be feasible.
However, like you said, that means additional operations for each event.
Yesterday I managed to find another solution: create the type information
outside of the class and pass
Hi there,
In the latest version, the calculator supports dynamic options. You
can append all your dynamic options to the end of "bin/calculator.sh
[-h]".
Since "-tm" will eventually be deprecated, please replace it with
"-Dtaskmanager.memory.process.size=".
Best,
Yangze Guo
On Mon, Mar 30, 20
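The replacement Yangze suggests can be sketched as follows; the memory value shown is purely illustrative, since the thread leaves it blank:

```shell
# Instead of the deprecated `-tm` flag, pass the equivalent dynamic
# option (1728m is an illustrative value, not from the thread):
bin/calculator.sh -Dtaskmanager.memory.process.size=1728m
```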