Hi, aj
> I was confused before as I was thinking the sink builder is called only once
> but it gets called for every batch request, correct me if my understanding is
> wrong.
You’re right; the sink builder should be called only once rather than for
every batch request. Could you post some code?
Hello,
I have posted the same question on Stack Overflow but didn't get any
response, so I am posting it here for help.
https://stackoverflow.com/questions/62068787/flink-s3-write-performance-optimization?noredirect=1#comment109814428_62068787
Details:
I am working on a Flink application on Kubernetes (EKS)
Hi Congxian,
Thanks for your feedback. You raised a point I had also already thought about. As
"assignTimestampsAndWatermarks" creates an operator extending the standard
AbstractUdfStreamOperator, I can also implement a RichFunction watermark
assigner with full state access. In my case, I was
Hi.
If you can cache the data in state, that is the preferred way: read all your
values from a store, do a keyBy, and store them in state in a CoProcessFunction.
If you need to do a lookup for every row, you have to use the AsyncIO function.
Med venlig hilsen / Best regards
Lasse Nedergaard
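The state-caching approach described above can be sketched roughly as below. This is a minimal illustration, not code from the thread: the String element types, the "reference" state name, and the comma-joined output format are all placeholder assumptions.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.co.CoProcessFunction;
import org.apache.flink.util.Collector;

// Sketch: connect the reference stream and the main stream after a keyBy on
// the same field, then cache the latest reference value per key in keyed state.
public class EnrichFromStateFunction
        extends CoProcessFunction<String, String, String> {

    private transient ValueState<String> referenceState;

    @Override
    public void open(Configuration parameters) {
        referenceState = getRuntimeContext().getState(
                new ValueStateDescriptor<>("reference", String.class));
    }

    @Override
    public void processElement1(String referenceValue, Context ctx,
                                Collector<String> out) throws Exception {
        // Reference stream: keep the latest value for this key in state.
        referenceState.update(referenceValue);
    }

    @Override
    public void processElement2(String event, Context ctx,
                                Collector<String> out) throws Exception {
        // Main stream: enrich from the cached state instead of an external lookup.
        String reference = referenceState.value();
        out.collect(reference == null ? event : event + "," + reference);
    }
}
```

Because both inputs are keyed the same way, the lookup is a local state read rather than a remote call per record.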
The open method would be a great place for that, and the close method could
close the connection when the operator closes!
Also, for external calls, AsyncIO is a great operator. Give that a look.
Regards,
Taher Koitawala
On Sat, May 30, 2020, 10:17 PM Aissa Elaffani
wrote:
> Hello Guys,
> I want to enrich a data stream with
Hello Guys,
I want to enrich a data stream with some MongoDB data, and I am willing to
use the RichFlatMapFunction, but I am lost; I don't know where to
configure the connection to my MongoDB. Can anyone help me with this?
Best,
Aissa
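The open()/close() pattern suggested in the replies can be sketched as below. This is a hedged example, not a definitive implementation: the connection URI, database and collection names, and the String-to-String signature are all illustrative assumptions.

```java
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;
import org.bson.Document;

public class MongoEnrichFlatMap extends RichFlatMapFunction<String, String> {

    private transient MongoClient client;
    private transient MongoCollection<Document> collection;

    @Override
    public void open(Configuration parameters) {
        // Create the connection once per parallel subtask, not per record.
        client = MongoClients.create("mongodb://localhost:27017"); // assumed URI
        collection = client.getDatabase("mydb").getCollection("lookup");
    }

    @Override
    public void flatMap(String key, Collector<String> out) {
        // Look up the enrichment document for this record's key.
        Document doc = collection.find(new Document("_id", key)).first();
        if (doc != null) {
            out.collect(key + "," + doc.toJson());
        }
    }

    @Override
    public void close() {
        if (client != null) {
            client.close(); // release the connection when the operator shuts down
        }
    }
}
```

The key point is that open() runs once before any records flow through the subtask, so the client is created once and reused, and close() releases it on shutdown.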
Hi.
I'm a bit confused by this point in the State TTL documentation:
" By default, expired values are explicitly removed on read, such as
ValueState#value, and periodically garbage collected in the background if
supported by the configured state backend. "
Does it mean that if I have only one
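For context, the cleanup behavior quoted above is configured per state descriptor. A minimal sketch using Flink's StateTtlConfig; the one-hour TTL, state name, and cleanup strategy are illustrative choices, not from the thread:

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
        .newBuilder(Time.hours(1))                       // entries expire after one hour
        .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
        .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
        .cleanupFullSnapshot()                           // also drop expired entries on full snapshots
        .build();

ValueStateDescriptor<String> descriptor =
        new ValueStateDescriptor<>("myState", String.class);
descriptor.enableTimeToLive(ttlConfig);
```

With NeverReturnExpired, a read such as ValueState#value filters out an expired entry even if the background cleanup has not removed it yet.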
Thanks Yun.
I have converted the code to use a keyed process function rather than a
flatMap, and using a registered timer it worked.
On Fri, May 29, 2020 at 11:13 AM Yun Gao wrote:
> Hi,
>
> I think you could use *timer* to achieve that. In *processFunction*
> you could register a timer at
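The timer approach described above can be sketched with a KeyedProcessFunction. This is only an illustration: the 60-second processing-time delay, the String types, and the emitted message are assumptions, not the poster's actual code.

```java
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;

public class TimerSketchFunction
        extends KeyedProcessFunction<String, String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) {
        // Register a processing-time timer 60 seconds from now (assumed interval).
        long fireAt = ctx.timerService().currentProcessingTime() + 60_000L;
        ctx.timerService().registerProcessingTimeTimer(fireAt);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) {
        // Called when the timer fires; emit results or clean up state here.
        out.collect("timer fired for key " + ctx.getCurrentKey() + " at " + timestamp);
    }
}
```

Timers are scoped to the current key, which is why moving from a flatMap to a keyed process function makes this pattern possible.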
Thanks, It worked.
I was confused before because I thought the sink builder is called only
once, but it gets called for every batch request; correct me if my
understanding is wrong.
On Fri, May 29, 2020 at 9:08 AM Leonard Xu wrote:
> Hi,aj
>
> In the implementation of ElasticsearchSink,
The job was submitted via code, and the JobManager can load it; it's just that when the TaskManager starts, this directory is not created. The UI looks as follows, and the error log is the one I pasted below.
On 2020-05-30 19:16:57, "462329521" <462329...@qq.com> wrote:
> What is your submit command? It looks like the configuration file cannot be loaded.
>
>
> -- Original message --
> From: "程龙" <13162790...@163.com>;
> Sent: 2020-05-30 19:13
> To: "user-zh" Subject:
What is your submit command? It looks like the configuration file cannot be loaded.
-- Original message --
From: "程龙" <13162790...@163.com>;
Sent: 2020-05-30 19:13
To: "user-zh"
2020-05-30 19:07:31,418 INFO org.apache.flink.yarn.YarnTaskExecutorRunner -
2020-05-30 19:07:31,418 INFO org.apache.flink.yarn.YarnTaskExecutorRunner - Registered UNIX signal
It was caused by a jar conflict, and I have fixed it.
I had put flink-connector-kafka_2.11-1.10.0.jar in the Flink lib directory,
while my project's pom file also had the flink-connector-kafka dependency
and was built as a fat jar.
Thanks,
Lei
wangl...@geekplus.com.cn
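One common way to avoid this kind of conflict, assuming a Maven build, is to keep the connector jar in Flink's lib directory and mark the project dependency as provided so it is excluded from the fat jar. A sketch, not the poster's actual pom:

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka_2.11</artifactId>
    <version>1.10.0</version>
    <!-- provided: compile against it, but do not bundle it into the fat jar,
         since the same jar already sits in the Flink lib directory -->
    <scope>provided</scope>
</dependency>
```

With only one copy of the connector on the classpath, the class-loading conflict disappears.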
From: Leonard Xu
Sent:
Hi Satyam,
You are correct. Blink planner is built on top of DataStream, both for
batch and streaming.
Hence you cannot transform a Table into a DataSet if you are using the Blink
planner.
AFAIK, the community is working on the unification of batch and streaming,
and the unification
will be Table
Thanks for your reply, Benchao Li.
While I can use the Blink planner in batch mode, I'd still have to work
with DataSet. Based on my limited reading, it appears to me that DataStream
is being extended to support both batch and streaming use-cases with the
`isBounded` method in the StreamTableSource