- The CREATE TABLE statement is as follows:
String kafka = "CREATE TABLE `电话` " +
"(`rowID` VARCHAR(255),`名称` STRING,`手机` VARCHAR(255),`座机` VARCHAR(255), " +
" PRIMARY KEY (`rowID`) NOT ENFORCED ) " +
" WITH " +
"('connector' = 'jdbc', " +
" 'driver' = 'com.mysql.cj.jdbc.Driver', " +
" 'url' =
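For reference, a completed version of the truncated string above might look like the sketch below; the JDBC URL, `table-name`, username, and password are hypothetical placeholders I've assumed, not values from the original message:

```java
// Completed sketch of the truncated DDL string above.
// url, table-name, username and password are hypothetical placeholders.
public class PhoneTableDdl {
    static final String DDL =
        "CREATE TABLE `电话` (" +
        "`rowID` VARCHAR(255), `名称` STRING, `手机` VARCHAR(255), `座机` VARCHAR(255), " +
        "PRIMARY KEY (`rowID`) NOT ENFORCED) " +
        "WITH (" +
        "'connector' = 'jdbc', " +
        "'driver' = 'com.mysql.cj.jdbc.Driver', " +
        "'url' = 'jdbc:mysql://localhost:3306/mydb', " + // hypothetical
        "'table-name' = '电话', " +
        "'username' = 'user', " +                        // hypothetical
        "'password' = 'pass')";                          // hypothetical

    public static void main(String[] args) {
        System.out.println(DDL);
    }
}
```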
Changing the failover strategy from region to full enlarges the impact scope of a single Task failure. You can refer to the community docs:
https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/ops/state/task_failure_recovery/
Best,
Weihua
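For context, this strategy is a cluster-level setting in flink-conf.yaml (key name per the Flink 1.16 docs linked above); a minimal sketch:

```yaml
# flink-conf.yaml
# region (default): restart only the pipelined region containing the failed task
# full: restart the entire job whenever any task fails
jobmanager.execution.failover-strategy: region
```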
On Fri, Feb 24, 2023 at 2:12 PM 唐世伟 wrote:
> Thanks for the reply. I checked, and the logs have already been deleted because they exceeded YARN's retention period. Also, would changing Failover from region to full avoid this problem?
>
> > Feb 23, 2023
Thanks for the reply. I checked, and the logs have already been deleted because they exceeded YARN's retention period. Also, would changing Failover from region to full avoid this problem?
> On Feb 23, 2023, at 11:36 AM, Weihua Hu wrote:
>
> Hi,
>
> When the other tasks are canceled, their state is first set to cancelling; a task failure in this state does not trigger Failover a second time.
> You can check whether the job is split into multiple regions; exceptions across multiple regions are counted together.
>
> Or could you paste the logs?
>
>
> Best,
> Weihua
>
>
> On Thu, Feb 23,
> About the parameter you mentioned: I checked, and isn't its default value AUTO? Do I need to set it to FORCE explicitly?
Sink upsert materialize is applied in the following circumstances:
1. `TABLE_EXEC_SINK_UPSERT_MATERIALIZE` is set to FORCE and the sink's primary key
is nonempty.
2. `TABLE_EXEC_SINK_UPSERT_MATERIALIZE` is set to AUTO and the sink's primary key
doesn't contain the upsert key of the input changelog.
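As a side note on the question in this thread: the default of `table.exec.sink.upsert-materialize` is indeed AUTO, so forcing materialization requires an explicit setting, e.g. in the SQL client:

```sql
-- Default is AUTO; FORCE always adds the materialize operator
-- for sinks that declare a primary key.
SET 'table.exec.sink.upsert-materialize' = 'FORCE';
```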
About the parameter you mentioned: I checked, and isn't its default value AUTO? Do I need to set it to FORCE explicitly?
Because ChangeLog data can be reordered by the Shuffle in a distributed
system, the data received by the Sink may not be in global upsert order. So an
upsert materialize operator is added before the upsert sink. It receives the upstream
changelog records and
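A minimal sketch of the idea (my own simplification, not Flink's actual SinkUpsertMaterializer code): keep, per key, the list of values that have been added but not yet retracted, and always emit the last surviving one, so a late retraction of an older value does not delete the newest row:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified sketch of what an upsert-materialize step does
// (assumption: not Flink's real operator, just the core idea).
public class UpsertMaterializeSketch {
    // key -> values added but not yet retracted, in arrival order
    private final Map<String, List<String>> state = new HashMap<>();

    /** Handle an insert/update_after record; returns the value to upsert. */
    public String addRecord(String key, String value) {
        List<String> values = state.computeIfAbsent(key, k -> new ArrayList<>());
        values.add(value);
        return values.get(values.size() - 1); // current last value wins
    }

    /** Handle a delete/update_before record; returns the surviving value, or null. */
    public String retractRecord(String key, String value) {
        List<String> values = state.get(key);
        values.remove(value); // drop the first matching occurrence
        return values.isEmpty() ? null : values.get(values.size() - 1);
    }

    public static void main(String[] args) {
        UpsertMaterializeSketch m = new UpsertMaterializeSketch();
        // After a shuffle, records for the same key may arrive out of global order:
        m.addRecord("id1", "v1");                  // upsert v1
        m.addRecord("id1", "v2");                  // upsert v2 (another channel)
        String out = m.retractRecord("id1", "v1"); // late retraction of v1
        // v2 is still live, so the sink keeps v2 instead of deleting the row
        System.out.println(out);                   // prints v2
    }
}
```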
Unsubscribe
The Apache Flink community is very happy to announce the release of Apache
Flink Kubernetes Operator 1.4.0.
The Flink Kubernetes Operator allows users to manage their Apache Flink
applications and their lifecycle through native k8s tooling like kubectl.
Release highlights:
- Flink Job