Hi,
GenericInMemoryCatalog does not support settings at the moment.
You can refer to [1] for details on supported catalogs
and to [2] for details on supported types.
"Kafka schema registry for schema" is under discussion [3],
which may be ready in 1.12.
The SQL Client supports DDL to create a table.
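As a minimal sketch of what such a DDL statement looks like in the SQL Client (the table, topic, and server names below are made up for illustration, and this assumes the Kafka connector and JSON format are on the classpath):

```sql
-- Illustrative example only: names and options are assumptions.
CREATE TABLE source_kafka (
  status STRING,
  direction STRING,
  event_ts BIGINT
) WITH (
  'connector' = 'kafka',
  'topic' = 'events',
  'properties.bootstrap.servers' = 'localhost:9092',
  'format' = 'json'
);
```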
Hi Evan,
Thanks for the feedback. There is already an issue [1] tracking this problem; you can follow it for further progress.
[1] https://issues.apache.org/jira/browse/FLINK-18545
ProcessWindowFunction#process is passed a Context object that contains
public abstract KeyedStateStore windowState();
public abstract KeyedStateStore globalState();
which are available for you to use for custom state. Whatever you store in
windowState is scoped to a single window and is cleared automatically when
that window is purged; globalState is shared across all windows for the
same key.
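A minimal sketch of using windowState for a custom per-window counter (the class and state names here are made up for illustration, and this assumes the Flink streaming API is on the classpath):

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

// Hypothetical example: count elements per key and window using windowState.
public class CountingWindowFunction
        extends ProcessWindowFunction<Long, String, String, TimeWindow> {

    private final ValueStateDescriptor<Long> countDesc =
            new ValueStateDescriptor<>("count", Long.class);

    @Override
    public void process(String key, Context ctx, Iterable<Long> elements,
                        Collector<String> out) throws Exception {
        ValueState<Long> count = ctx.windowState().getState(countDesc);
        long n = count.value() == null ? 0L : count.value();
        for (Long ignored : elements) {
            n++;
        }
        count.update(n);
        out.collect(key + ": " + n);
    }

    @Override
    public void clear(Context ctx) throws Exception {
        // windowState is cleared automatically when the window is purged,
        // but clear() is the hook for any explicit cleanup you want to do.
        ctx.windowState().getState(countDesc).clear();
    }
}
```

Note that ProcessAllWindowFunction has the same clear(Context) hook, so the absence of onTimer is not a problem for cleanup.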
In Zeppelin you can specify the job name of an insert statement, as shown below. (If you are interested in Zeppelin, feel free to join the DingTalk group: 32803524)
%flink.ssql(jobName="my job")
insert into sink_kafka select status, direction, cast(event_ts/10
as timestamp(3)) from source_kafka where status <> 'foo'
Evan wrote on Saturday, July 18, 2020 at 5:47 PM:
Dear all:
How do I clear custom state in ProcessWindowFunction? There is no
onTimer method in ProcessAllWindowFunction.
Thanks
Jiazhi
Prasanna,
The Flink project does not have an SQS connector, and a quick Google search
hasn't found one. Nor does Flink have an HTTP sink, but with a bit of
googling you can find that various folks have implemented this themselves.
As for implementing SQS as a custom sink: if you need exactly-once
guarantees, expect that to take some extra care.
You should be able to tune your setup to avoid the OOM problem you have run
into with RocksDB. It will grow to use all of the memory available to it,
but it shouldn't leak; perhaps something is misconfigured.
As for performance, with the FsStateBackend you can expect:
* much better throughput and lower latency, since state lives on the JVM
heap as objects, but
* your state must fit in memory, with headroom left for everything else.
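If RocksDB memory is the issue, the relevant knobs (as of Flink 1.10+) are the managed-memory settings in flink-conf.yaml. A minimal sketch, with the size chosen purely for illustration:

```yaml
# Use RocksDB and cap its memory at Flink's managed memory
# (this is the default behavior since Flink 1.10).
state.backend: rocksdb
state.backend.rocksdb.memory.managed: true
# Managed memory per TaskManager; illustrative value, tune to your setup.
taskmanager.memory.managed.size: 1gb
# Incremental checkpoints reduce checkpointing overhead for large state.
state.backend.incremental: true
```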
Steve,
Your approach to debugging this sounds reasonable, but keep in mind that
the backpressure detection built into the WebUI is not infallible. You
could have backpressure that it doesn't detect.
FWIW, keyBy isn't an operator -- it's a declaration of how the operators
before and after the keyBy are connected: the stream is repartitioned so
that all records with the same key go to the same downstream instance.
The code is roughly as follows: a Kafka source table and an Elasticsearch sink table, executed via tableEnv.executeSql("insert into
esSinkTable select ... from kafkaSourceTable").
After the job is submitted, the job name is "insert-into_<some catalog>_<some database>.<some table>".
This is not very friendly; is there a way for me to specify the job name myself?