Yes, the parameters were not filled in correctly; thanks.
One more question: do UDFs currently not support Python variable-length arguments (varargs)? When I use varargs I still get an error about incorrect parameters.
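For reference, the varargs behavior in question can be seen in plain Python (illustration only, not PyFlink itself): `*rest` packs any number of extra positional arguments into one tuple, so the call has no fixed arity for a declared list of input types to match against.

```python
# *rest collects all extra positional arguments into a single tuple,
# so the function's arity is variable at the call site.
def collect(first, *rest):
    return first, rest

print(collect("msg", "a", "b"))  # rest is packed into the tuple ("a", "b")
print(collect("msg"))            # rest may also be empty: ()
```

A type-checking UDF registry that expects one declared type per parameter has nothing fixed to check `*rest` against, which would explain the parameter-mismatch error.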
On 2020-06-01 11:01:34, "Dian Fu" wrote:
>The input types should be as following:
>
>input_types=[DataTypes.STRING(), DataTypes.ARRAY(DataTypes.STRING())]
>
>Regards,
>Dian
>
>> On Jun 1, 2020, at 10:49 AM, 刘亚坤 wrote:
>>
>>
org.apache.flink.configuration.IllegalConfigurationException: The Flink config
file
'/tmp/hadoop-bjhl/nm-local-dir/usercache/bjhl/appcache/application_1590820026922_0020/container_1590820026922_0020_01_08/flink-conf.yaml'
Hi, Nick.
Do you mean that you manually execute "kinit -R" to renew the ticket cache?
If that is the case, Flink already sets "renewTGT" to true, so when
everything works you should not need to do it yourself. However, this
mechanism seems to have a bug that is not fixed in all JDK
versions.
These questions are hard to answer in a sentence or two; I suggest reading the documentation and blog posts on the Flink website.
On 2020-06-01 11:08:27, "xyq" wrote:
>hello,
>Sorry to bother you, I have a few questions:
>
>1. How does a Flink window handle late data? If I am writing my data to Kafka or ClickHouse, can a side-output stream handle it?
>
>2. How does Flink achieve end-to-end exactly-once? Does the sink component itself have to support exactly-once, and does ClickHouse support it?
>
>3. If I suddenly find that previously processed data was wrong, how do I recover the data from an earlier point?
>
Hi,
On Mon, Jun 1, 2020 at 5:47 AM Omid Bakhshandeh wrote:
> Hi,
>
> I'm very confused about StateFun 2.0 new architecture.
>
> Is it possible to have both remote and embedded functions in the same
> deployment?
>
Yes that is possible. Embedded functions simply run within the Flink
StateFun
Hi all,
This is an interesting topic. Schema inference will be the next big feature
planned in the next release.
I added this thread link into FLINK-16420.
I think Guodong's case is schema evolution, which is related to schema
inference.
I don't have a clear idea for
Hi Vijay,
The error message suggests that another task manager (10.127.106.54) is not
responding. This could happen when the remote task manager has failed or
under severe GC pressure. You would need to find the log of the remote task
manager to understand what is happening.
Thank you~
Xintong
hello,
Sorry to bother you, I have a few questions:
1. How does a Flink window handle late data? If I am writing my data to Kafka or ClickHouse, can a side-output stream handle it?
2. How does Flink achieve end-to-end exactly-once? Does the sink component itself have to support exactly-once, and does ClickHouse support it?
3. If I suddenly find that previously processed data was wrong, how do I recover the data from an earlier point?
4. How can Flink count daily active users without external components (assuming the data volume is large)?
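Question 4 above asks how to count daily active users without external components. A minimal plain-Python sketch of the exact-count approach (the event data and names here are hypothetical; in Flink the per-day set would live in keyed state, and for very large id spaces an approximate structure such as HyperLogLog would bound the memory):

```python
from collections import defaultdict

# Hypothetical event stream: (day, user_id) pairs.
events = [("2020-06-01", "u1"), ("2020-06-01", "u2"),
          ("2020-06-01", "u1"), ("2020-06-02", "u3")]

active_users = defaultdict(set)
for day, user_id in events:
    active_users[day].add(user_id)   # a set deduplicates repeat visits

# Daily active users = distinct ids seen per day.
dau = {day: len(users) for day, users in active_users.items()}
print(dau)  # {'2020-06-01': 2, '2020-06-02': 1}
```

The trade-off is memory: an exact set grows with the number of distinct users, which is why approximate sketches are the usual answer at large scale.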
The input types should be as following:
input_types=[DataTypes.STRING(), DataTypes.ARRAY(DataTypes.STRING())]
Regards,
Dian
> On Jun 1, 2020, at 10:49 AM, 刘亚坤 wrote:
>
> I am currently learning the pyflink Table API and have a question about a pyflink UDF error:
>
> @udf(input_types=[DataTypes.STRING()], result_type=DataTypes.STRING())
> def
I am currently learning the pyflink Table API and have a question about a pyflink UDF error:
@udf(input_types=[DataTypes.STRING()], result_type=DataTypes.STRING())
def drop_fields(message, *fields):
    import json
    message = json.loads(message)
    for field in fields:
        message.pop(field)
    return
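Following Dian's suggestion in this thread, the variable-length parameter becomes a single list argument matching input_types=[DataTypes.STRING(), DataTypes.ARRAY(DataTypes.STRING())]. A plain-Python sketch of the function body with that signature (the @udf decorator is elided; the json.dumps at the end is an assumption about the intended STRING result, since the original snippet's bare return would yield None):

```python
import json

# Sketch of the corrected UDF body: `fields` is one list argument
# instead of Python varargs, giving the function a fixed arity.
def drop_fields(message, fields):
    data = json.loads(message)
    for field in fields:
        data.pop(field, None)   # pop(field, None) tolerates missing keys
    return json.dumps(data)     # serialize back to a STRING result

print(drop_fields('{"a": 1, "b": 2}', ["a"]))  # {"b": 2}
```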
Hi Vasily
Since Flink 1.10, state is cleaned up periodically because CleanupInBackground
is enabled by default. Thus, even if you never access a specific entry of state,
that entry can still be cleaned up.
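Yun's point is that with background cleanup, expiry no longer depends on the entry being read. A toy plain-Python simulation of that behavior (this is not Flink's implementation; the class name, the injectable clock, and the sweep strategy are all assumptions for illustration):

```python
import time

class TtlMap:
    """Toy state map: entries expire ttl seconds after their last write,
    and a background-style sweep removes them even if never read again."""
    def __init__(self, ttl_seconds, clock=time.time):
        self.ttl = ttl_seconds
        self.clock = clock          # injectable clock so tests run instantly
        self.entries = {}           # key -> (value, write_timestamp)

    def put(self, key, value):
        self.entries[key] = (value, self.clock())

    def sweep(self):
        # The periodic cleanup pass: drop expired entries without any access.
        now = self.clock()
        self.entries = {k: v for k, v in self.entries.items()
                        if now - v[1] < self.ttl}

# Simulated clock instead of real time.
now = [0.0]
state = TtlMap(ttl_seconds=10, clock=lambda: now[0])
state.put("user-1", "seen")
now[0] = 11.0        # advance past the TTL
state.sweep()        # entry is gone even though it was never read
print("user-1" in state.entries)  # False
```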
Best
Yun Tang
From: Vasily Melnik
Sent: Saturday,
I suspect the JM did not register flink-conf.yaml as a local resource. The root
cause is most likely a problem with how you submitted the job, which left some
files un-shipped. If you can post the JM log and the launch_container.sh script,
we should be able to see the cause.
Best,
Yang
Leonard Xu wrote on Mon, Jun 1, 2020, at 9:39 AM:
> Images in emails often don't show up; you can use an image-hosting service and post a link instead.
>
> > On Jun 1, 2020, at 09:27, wangweigu...@stevegame.cn wrote:
> >
> >
> > It seems none of the images in this email are visible; can you all see them?
> >
>
Images in emails often don't show up; you can use an image-hosting service and post a link instead.
> On Jun 1, 2020, at 09:27, wangweigu...@stevegame.cn wrote:
>
>
> It seems none of the images in this email are visible; can you all see them?
>
>
>
> From: 程龙
> Sent: 2020-05-30 19:20
> To: user-zh
> Subject: Re: re: Submitted Flink job cannot create a taskmanager; the Flink web UI stays in the CREATED state; error log below
>
>
> The jobmanager submitted via code loads fine, but this directory is never created when the taskmanager starts,
It seems none of the images in this email are visible; can you all see them?
From: 程龙
Sent: 2020-05-30 19:20
To: user-zh
Subject: Re: re: Submitted Flink job cannot create a taskmanager; the Flink web UI stays in the CREATED state; error log below
The jobmanager submitted via code loads fine, but this directory is never created when the taskmanager starts. The UI is as shown, and the error log is the one I pasted below.
On 2020-05-30 19:16:57, "462329521" <462329...@qq.com> wrote:
Hi all,
To deal with corrupted messages that can leak into the data source once in
a while, we implement a custom DefaultKryoSerializer class as below that
catches exceptions. The custom serializer returns null in read(...) method
when it encounters exception in reading. With this implementation,
Hi,
I'm very confused about StateFun 2.0 new architecture.
Is it possible to have both remote and embedded functions in the same
deployment?
Is there a tutorial that shows the deployment of the two types in the same
Kubernetes cluster alongside with Flink(possible in Python and Java)?
Also, is
Hi David,
The average size of each file is around 30 KB and I have a checkpoint interval of
5 minutes. Some files are even 1 KB; because of checkpointing, some files are
merged into one big file of around 300 MB.
With 120 million files and 4 TB, if the transfer rate is 300 per minute,
it is taking weeks to
Hi Venkata.
300 requests per minute looks like 200 ms per request, which should be a
normal response time to send a file if there isn't any speed limitation
(how big are the files?).
Have you raised the parallelism above 1? I also recommend limiting
the source parallelism,
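The back-of-the-envelope figures in the two messages above can be checked directly (the numbers are taken from the thread):

```python
files = 120_000_000          # total files cited in the thread
rate_per_minute = 300        # observed transfer rate

seconds_per_request = 60 / rate_per_minute   # 0.2 s, i.e. 200 ms per request
total_minutes = files / rate_per_minute      # 400,000 minutes for the backlog
total_days = total_minutes / (60 * 24)       # convert minutes to days

print(seconds_per_request)   # 0.2
print(round(total_days, 1))  # 277.8
```

So the quoted 200 ms per request checks out, and at that rate the full backlog is on the order of months, which is why raising the parallelism matters here.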