Hi,
As the documentation above describes, when a partition is composed of multiple fields, you can use partition.time-extractor.timestamp-pattern in the DDL to assemble those fields into whatever partition timestamp format you need.
CREATE TABLE fs_table (
  user_id STRING,
  order_amount DOUBLE,
  dt STRING,
  `hour` STRING
) PARTITIONED BY (dt, `hour`) WITH (
  'connector'='filesystem',
  'path'='...',
  -- the original message was cut off here; the option below completes the
  -- example: $dt and $hour are replaced by the values of the two partition
  -- columns to form one timestamp
  'partition.time-extractor.timestamp-pattern'='$dt $hour:00:00'
);
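Conceptually, the extractor just substitutes each partition column's value into the pattern string. A minimal plain-Python illustration of that substitution (this is not Flink code; the pattern and values are examples):

```python
from string import Template

# Illustration only: mimic how a pattern such as '$dt $hour:00:00'
# combines two partition fields into a single timestamp string, which
# is what partition.time-extractor.timestamp-pattern configures in Flink.
def extract_timestamp(pattern: str, partition_values: dict) -> str:
    # '$dt' and '$hour' are replaced by the partition column values.
    return Template(pattern).substitute(partition_values)

print(extract_timestamp('$dt $hour:00:00', {'dt': '2020-05-20', 'hour': '13'}))
# 2020-05-20 13:00:00
```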
The Apache Flink community is very happy to announce the release of Apache
flink-connector-gcp-pubsub v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available, and accurate data streaming
applications.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-elasticsearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-opensearch v1.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
The Apache Flink community is very happy to announce the release of Apache
flink-connector-pulsar v4.0.0. This release is compatible with Flink 1.17.x.
The Apache Flink community is very happy to announce the release of Apache
flink-shaded v17.0.
The release is available for download at:
The Apache Flink community is very happy to announce the release of Apache
flink-connector-rabbitmq v3.0.1. This release is compatible with Flink
1.16.x and Flink 1.17.x.
Great, thank you all for the guidance.

小昌同学
ccc0606fight...@163.com
Original message
| From | yidan zhao |
| Date | 2023-04-21 10:50 |
| To | |
| Subject | Re: Using different parallelism for different parts of the pipeline |
As for what to consider: mainly the complexity of each operator's work; the more complex the operator, the higher its parallelism should be. Then, at runtime, you can also use backpressure to locate the bottleneck and adjust accordingly.
Shammon FY wrote on Fri, Apr 21, 2023 at 09:04:
Hi
There are two ways to set the parallelism of a DataStream job:
1.
To unsubscribe from the user-zh@flink.apache.org mailing list, send an email with any content to
user-zh-unsubscr...@flink.apache.org; see [1].
[1] https://flink.apache.org/zh/community/
On Wed, May 10, 2023 at 1:38 AM Zhanshun Zou wrote:
> Unsubscribe
>
My Hive partitions look like dt='20200520'. When writing to Hive with Flink SQL, the default
timestamp pattern is 'yyyy-mm-dd hh:mm:ss'. Can partition.time-extractor.timestamp-pattern
be set to something like 'yyyymmdd hh:mm:ss'? I'm on Flink 1.13.
Hi casel.chen,
My understanding of what you want is:
30 minutes after a record arrives on ThirdPartyPaymentStream, *then* trigger the lookup;
if at that point the record still has not appeared in PlatformPaymentStream, the payment has timed out, so emit it downstream. Rather than waiting for the ThirdPartyPaymentStream record to arrive and only then checking for a timeout, because in that case, even though the timeout has elapsed, the order has effectively been paid and there is no need to raise an alert.
For a DataStream job, you can use a timer to fire after the delay.
For SQL, a somewhat roundabout idea of mine (for reference only, it may not be right) is to use Pulsar
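The timer idea above can be sketched in plain Python (this is not Flink API code; the class, stream roles, and field names are hypothetical): when a record arrives on the first stream, register a deadline 30 minutes later; a matching record on the second stream cancels it; any deadline that passes without a match produces an alert.

```python
# Minimal simulation of per-key timeout timers, the same pattern a
# Flink KeyedProcessFunction timer would implement. Hypothetical names.
TIMEOUT_MS = 30 * 60 * 1000  # 30 minutes

class UnpaidOrderDetector:
    def __init__(self):
        self.pending = {}   # order_id -> deadline in event-time ms
        self.alerts = []    # order_ids that timed out unpaid

    def on_order(self, order_id, ts_ms):
        # First stream: start the 30-minute countdown for this order.
        self.pending[order_id] = ts_ms + TIMEOUT_MS

    def on_payment(self, order_id):
        # Second stream: matching payment seen, cancel the pending timer.
        self.pending.pop(order_id, None)

    def on_watermark(self, ts_ms):
        # Fire every timer whose deadline has passed; those orders were
        # never matched by a payment, so report them downstream.
        for order_id, deadline in list(self.pending.items()):
            if deadline <= ts_ms:
                self.alerts.append(order_id)
                del self.pending[order_id]

d = UnpaidOrderDetector()
d.on_order("A", 0)
d.on_order("B", 0)
d.on_payment("A")            # A is paid in time, no alert
d.on_watermark(TIMEOUT_MS)   # B's timer fires
print(d.alerts)              # ['B']
```

The key point is that the check is driven by the timer (watermark) rather than by the arrival of the payment record, so a late payment never triggers a spurious alert.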