Right, the execution plan does show some information. I was just wondering whether there is a better way to see the specific low-level functions and state involved, to make it easier to analyze performance-related issues.
---- Replied Message ----
| From | Shammon FY |
| Date | 2023-08-28 12:05 |
| To | user-zh@flink.apache.org |
| Cc | |
| Subject | Re: Mapping a Flink SQL statement to its low-level processing functions |
If you want to see the concrete execution steps a SQL statement is translated into, you can inspect the execution plan with the EXPLAIN syntax [1].
[1]
https://nightlies.apache.org/flink/flink-docs-master/docs/dev/table/sql/explain/
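As a minimal sketch of the suggestion above (the table schema and the `datagen` connector here are placeholder choices, not from the thread), the plan can be obtained either with an `EXPLAIN` statement or via `TableEnvironment.explainSql`:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ExplainExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical source table backed by the datagen connector.
        tEnv.executeSql(
                "CREATE TABLE orders (order_id BIGINT, amount DOUBLE, user_id BIGINT) "
                + "WITH ('connector' = 'datagen')");

        // Prints the abstract syntax tree, the optimized logical plan,
        // and the physical execution plan for the query.
        String plan = tEnv.explainSql(
                "SELECT user_id, SUM(amount) FROM orders GROUP BY user_id");
        System.out.println(plan);
    }
}
```

The printed physical plan names the generated operators (e.g. group-aggregate nodes), which is the closest SQL-level view of the underlying processing functions.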
You can also check the Apache Paimon project https://paimon.apache.org/
(previously known as Flink Table Store). It might help in some scenarios.
On Mon, Aug 28, 2023 at 5:05 AM liu ron wrote:
> Hi, Nirmal
>
> Flink SQL is standard ANSI SQL with extensions. Flink SQL provides
> rich Join and
> - Why does a Flink `CREATE TABLE` from Protobuf require the entire table
> column structure to be defined in SQL again? Shouldn't fields be inferred
> automatically from the provided protobuf class?
I agree that auto schema inference is a good feature. The reason why
ProtoBuf Format does not
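For reference, this is what the explicit DDL looks like today: the column list has to mirror the Protobuf message definition by hand. A hedged sketch — the topic, broker address, message class, and columns below are all placeholders, and the `protobuf.message-class-name` option is the format option as I understand it:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class ProtobufTableExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Columns must be declared explicitly even though the Protobuf
        // class already describes them; all names here are placeholders.
        tEnv.executeSql(
                "CREATE TABLE events ("
                + "  id BIGINT,"
                + "  name STRING"
                + ") WITH ("
                + "  'connector' = 'kafka',"
                + "  'topic' = 'events',"
                + "  'properties.bootstrap.servers' = 'localhost:9092',"
                + "  'format' = 'protobuf',"
                + "  'protobuf.message-class-name' = 'com.example.Event'"
                + ")");
    }
}
```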
On Sun, Aug 27, 2023 at 5:23 PM 海风 <18751805...@163.com> wrote:
> Could I ask: after a Flink SQL job is submitted and running, is there a way to query
> what low-level processing functions Flink has translated it into? And if stateful
> computation is involved, which state variables has Flink defined for this SQL?
Hi, longfeng.
We could use `withIdleness`[1] to deal with the idle sources.
Best,
Hang
[1]
https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/event-time/generating_watermarks/#dealing-with-idle-sources
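A minimal sketch of that suggestion, assuming an event type with an event-time field — the five-second out-of-orderness bound, the one-minute idle timeout, and `MyEvent` are all arbitrary placeholder choices:

```java
import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdleSourceExample {
    // Placeholder event type carrying an event-time field.
    public record MyEvent(long timestamp) {}

    public static WatermarkStrategy<MyEvent> strategy() {
        return WatermarkStrategy
                // Tolerate events up to 5 seconds out of order.
                .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((event, ts) -> event.timestamp())
                // Mark a partition idle after 1 minute without records,
                // so it no longer holds back the overall watermark.
                .withIdleness(Duration.ofMinutes(1));
    }
}
```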
longfeng Xu wrote on Sun, Aug 27, 2023 at 14:01:
> Hello,
>
> The issue I’m
There's friction with using Scala/Java Protobuf and trying to convert them
into a Flink Table from a DataStream[ProtobufObject].
Scenario:
Input is a DataStream[ProtobufObject] from a Kafka topic that we read data
from and deserialise into Protobuf objects (Scala case classes or
alternatively Java
Hi, Ravi
I took a deep dive into the source code [1]: the parallelism of the Sink
operator matches that of its inputs, so I suggest you check the
parallelism of the upstream operators.
[1]
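If pinning the sink explicitly is preferable to tuning the upstream operators, many FLIP-95-style table connectors accept a `sink.parallelism` option (support varies by connector). A hedged sketch where the table, topic, and broker address are placeholders:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class SinkParallelismExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // 'sink.parallelism' pins the sink's parallelism instead of letting
        // it inherit from upstream. Not every connector supports the option.
        tEnv.executeSql(
                "CREATE TABLE sink_table (id BIGINT, name STRING) WITH ("
                + "  'connector' = 'kafka',"
                + "  'topic' = 'out',"
                + "  'properties.bootstrap.servers' = 'localhost:9092',"
                + "  'format' = 'json',"
                + "  'sink.parallelism' = '4'"
                + ")");
    }
}
```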
Hi, Nirmal
Flink SQL is standard ANSI SQL with extensions. Flink SQL provides rich
Join and Aggregate syntax, including Regular Streaming Join, Interval Join,
Temporal Join, Lookup Join[2], Window Join[3], unbounded group aggregate[4],
and window aggregate[5], and so on. Theoretically, it can
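To illustrate one of those join flavors, here is a hedged interval-join sketch; both tables, their schemas, and the `datagen` connector are placeholder choices:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class IntervalJoinExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Hypothetical orders and shipments tables (datagen for brevity).
        tEnv.executeSql(
                "CREATE TABLE orders (id BIGINT, order_time TIMESTAMP(3),"
                + " WATERMARK FOR order_time AS order_time - INTERVAL '5' SECOND)"
                + " WITH ('connector' = 'datagen')");
        tEnv.executeSql(
                "CREATE TABLE shipments (order_id BIGINT, ship_time TIMESTAMP(3),"
                + " WATERMARK FOR ship_time AS ship_time - INTERVAL '5' SECOND)"
                + " WITH ('connector' = 'datagen')");

        // Interval join: match each order with a shipment that occurs
        // within 4 hours after the order time.
        tEnv.executeSql(
                "SELECT o.id, s.ship_time "
                + "FROM orders o JOIN shipments s "
                + "ON o.id = s.order_id "
                + "AND s.ship_time BETWEEN o.order_time "
                + "AND o.order_time + INTERVAL '4' HOUR");
    }
}
```

Because the join condition bounds how far apart the two streams can be, Flink can expire state for rows that fall outside the interval, unlike a regular streaming join.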
Hello!
We have a use case requirement to implement complex joins and aggregation
on multiple sql tables. Today, it is happening at SQLServer level which is
degrading the performance of SQLServer Database.
Is it a good idea to implement it through Apache Flink Table API for
real-time data joins?
Hi, I am trying to use the Table API call that converts Table data into a
DataStream: StreamTableEnvironment.toChangelogStream(Table table). I have
noticed that its parallelism is always one (1). How can I set it to more
than one?
If it is intended to execute with a single thread