Hi folks,
When can we expect the release to be made available to the community?
On Wed, Dec 22, 2021 at 3:07 PM David Morávek wrote:
> Hi Debraj,
>
> we're currently not planning another emergency release as this CVE is not
> as critical for Flink users as the previous one. However, this patch
Hi
This is not supported yet; what you saw must be content from the future roadmap.
casel.chen wrote on Fri, Dec 24, 2021 at 20:50:
> An article says Flink CDC 2.0 supports whole-database synchronization, see https://www.jianshu.com/p/b81859d67fec
> Whole-database sync: to synchronize an entire database, the user only needs a single line of SQL, instead of defining a DDL and a query for every table.
> I would like to know how Flink CDC 2.0 implements whole-database sync. Are there any examples? Thanks!
Hi Yuval,
It seems that the Scala source code is not included in
`flink-table-api-scala-bridge_2.12-1.14.2-sources.jar` for now.
You can find all the compiled code in the compiled jar
(flink-table-api-scala-bridge_2.12-1.14.2.jar) for debugging.
If we also need to include the Scala code in the sources jar, we
Hi Seth,
Thank you for confirming that the issue is due to the transition in 1.14.
For now, given my constraints, I will use a simple workaround and download
the whole dataset with the Java AWS library.
For future reference though I would like to solve this
I am actually still on 1.12 at the moment and had
Hi!
Both ExecutionEnvironment and StreamExecutionEnvironment have a registerJobListener method
[1][2], which accepts a JobListener
[3] whose corresponding callbacks are invoked when the job is submitted and when it completes. Note that this requires the client program that submitted the job to keep running until the job finishes and the callback has been invoked.
[1]
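As a sketch of the pattern described above (assuming Flink 1.14's `org.apache.flink.core.execution.JobListener` interface; the job itself is a trivial placeholder):

```java
import org.apache.flink.api.common.JobExecutionResult;
import org.apache.flink.core.execution.JobClient;
import org.apache.flink.core.execution.JobListener;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class JobListenerExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.registerJobListener(new JobListener() {
            @Override
            public void onJobSubmitted(JobClient jobClient, Throwable t) {
                // Invoked right after the job has been handed to the cluster.
                if (t == null) {
                    System.out.println("Submitted job " + jobClient.getJobID());
                }
            }

            @Override
            public void onJobExecuted(JobExecutionResult result, Throwable t) {
                // Invoked once the job finishes.
                if (t == null) {
                    System.out.println("Job took " + result.getNetRuntime() + " ms");
                }
            }
        });
        env.fromElements(1, 2, 3).print(); // placeholder job
        // execute() blocks until the job completes, so both callbacks fire here.
        env.execute("job-listener-demo");
    }
}
```

As the answer above notes, the client must stay alive until the job completes for onJobExecuted to be invoked.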
Hi John,
It sounds like you have a Flink standalone cluster deployed directly on
physical hosts. If that is the case, use `t.m.flink.size` instead of
`t.m.process.size`. The latter does not limit the overall memory
consumption of the processes, and is only used for calculating how much
non-JVM
Dear Member
That was my mistake; I did not edit the report function.
No more questions about this case. Sorry to bother you!
Best regards
Zhen ZHANG(Allen) Finance Accounting and Management(FAM) Functional (FIN)
Department Enactus-Entrepreneurial Action Us (Former name: SIFE) TB114, The
Dear Member
I have just started learning Flink and am trying the exercise "Real Time Reporting with the
Table API"
(https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/try-flink/table_api/)
When I ran docker-compose, all containers worked except the jobmanager, which
exited with code 2.
The
Dear Member
I have just started learning Flink and am trying the exercise "Real Time Reporting with the
Table API"
(https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/try-flink/table_api/)
When I ran docker-compose, all containers worked except the jobmanager, which
exited with code 2.
I
Ok, I tried taskmanager.memory.process.size: 7168m.
It's worse; the task manager barely starts before it throws
java.lang.OutOfMemoryError: Metaspace
I will try...
taskmanager.memory.flink.size: 5120m
taskmanager.memory.jvm-metaspace.size: 2048m
On Sun, 26 Dec 2021 at 19:46, John Smith
Hi, I'm running Flink 1.10.
I have
taskmanager.memory.flink.size: 6144m
taskmanager.memory.jvm-metaspace.size: 1024m
taskmanager.numberOfTaskSlots: 8
parallelism.default: 1
1- The host has 8GB of physical RAM. Am I better off just configuring
"taskmanager.memory.process.size" as 7GB and letting Flink
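For reference, the relationship between these keys can be sketched as a config fragment (the overhead defaults below are what I recall for Flink 1.10; treat them as assumptions and check the memory configuration docs for your exact version):

```yaml
# Illustrative flink-conf.yaml arithmetic for an 8 GB host (values are examples):
#
#   taskmanager.memory.process.size  = total JVM footprint (everything below)
#   taskmanager.memory.flink.size    = heap + managed + network only
#   process.size ≈ flink.size + jvm-metaspace + jvm-overhead
#
taskmanager.memory.process.size: 7168m        # leaves ~1 GB for the OS
taskmanager.memory.jvm-metaspace.size: 1024m
# jvm-overhead defaults to 10% of process size (clamped to [192m, 1g]),
# so roughly 7168m - 1024m - 717m ≈ 5.3 GB remains as Flink memory.
```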
Hey David,
I placed the jar in the flink folder, but now I see some weird exceptions that
weren't there before:
1. In the Flink logs I constantly see "Failed to access job archive location for
path hdfs:/completed-jobs. java.io.FileNotFoundException: File
hdfs:/completed-jobs does not exist." And after a couple
Do you mean these two parameters of upsert-kafka?
sink.buffer-flush.max-rows
sink.buffer-flush.interval
They can indeed achieve the effect I want, but they introduce an extra Kafka sink, and the data still has to be consumed from that sink
Kafka topic and written into MySQL, so the pipeline gets rather long. Ideally I could just add an aggregation operator before the sink in the original job.
On 2021-12-25 22:54:19, "郭伟权" wrote:
The JDBC sink's buffer-flush does not reduce the amount of data written; it only turns the writes into micro-batches, so the write pressure on MySQL is not reduced.
What I want to achieve is an actual reduction in the amount of data written, because all records with the same key are aggregated into a single final record.
On 2021-12-26 09:43:47, "Zhiwen Sun" wrote:
> No need to make it that complicated; a normal INSERT ... SELECT ... GROUP BY is enough, writing to MySQL once a minute.
>
> See the sink.buffer-flush.interval and sink.buffer-flush.max-rows options
> of the JDBC sink [1]
>
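The suggestion above can be sketched in Flink SQL as follows (table names, columns, and the MySQL URL are placeholders I made up; sink.buffer-flush.* are real options of the JDBC connector):

```sql
-- Hypothetical aggregating insert: one row per key reaches MySQL,
-- and the JDBC sink micro-batches the upserts.
CREATE TABLE mysql_sink (
  user_id BIGINT,
  cnt     BIGINT,
  PRIMARY KEY (user_id) NOT ENFORCED
) WITH (
  'connector' = 'jdbc',
  'url' = 'jdbc:mysql://localhost:3306/mydb',   -- placeholder
  'table-name' = 'agg_result',                  -- placeholder
  'sink.buffer-flush.interval' = '60s',         -- flush at most once a minute
  'sink.buffer-flush.max-rows' = '5000'
);

INSERT INTO mysql_sink
SELECT user_id, COUNT(*) AS cnt
FROM source_events                              -- placeholder source table
GROUP BY user_id;
```

If I recall correctly, when a primary key is defined the JDBC sink keeps only the latest buffered change per key before flushing, in which case the flush interval does reduce the written volume; please verify this against the connector docs for your version.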
Take the following submission command as an example. pod-template.yaml lives on the same machine where the
run-application command is executed; the Flink client automatically stores the file in a ConfigMap and mounts it into the JM pod.
The user jar (StateMachineExample.jar) needs to be inside the image.
Note: resources that must be inside the image are generally referenced with the local:// scheme; local files on the client machine do not need it.
bin/flink run-application -t kubernetes-application \
-Dkubernetes.cluster-id=my-flink-cluster \
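For reference, a fuller sketch of such a command (the image name, pod template path, and jar path below are my assumptions, not taken from this thread; kubernetes.pod-template-file is the option that ships a client-local pod template):

```shell
bin/flink run-application -t kubernetes-application \
  -Dkubernetes.cluster-id=my-flink-cluster \
  -Dkubernetes.container.image=my-registry/my-flink:1.14 \
  -Dkubernetes.pod-template-file=./pod-template.yaml \
  local:///opt/flink/examples/streaming/StateMachineExample.jar
```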