Version: 1.12
Issue: for a dimension-table join to support event time, the dimension table needs a primary key and a time attribute. With both conditions satisfied, if the custom dimension table implements the LookupTableSource interface, the optimizer throws an exception:
Caused by: org.apache.calcite.plan.RelOptPlanner$CannotPlanException: There are
not enough rules to produce a node with desired properties:
convention=STREAM_PHYSICAL, FlinkRelDistributionTraitDef=any,
Couldn't we just use Flink's REST API? I run a standalone cluster and wrote a Python script with a list called expected_jobs; if one of those jobs is missing from the cluster, the script raises an alert.
Yun Tang wrote on Fri, Jan 8, 2021 at 10:53 AM:
> Because numRestarts is a cumulative value, you have to check whether the current value has increased compared to the previous one in order to tell whether a failover occurred.
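A minimal sketch of that check (the poster describes a Python script; the same idea is sketched in Java here). It assumes the standard /jobs/overview REST endpoint; the job names, JobManager address, and alert action are hypothetical, and a real script would parse the JSON properly:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class JobWatchdog {
    // Jobs we expect to be running on the standalone cluster (hypothetical names).
    static final List<String> EXPECTED_JOBS = List.of("etl-job", "alerting-job");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://jobmanager:8081/jobs/overview"))
                .build();
        String body = client.send(request, HttpResponse.BodyHandlers.ofString()).body();
        for (String job : EXPECTED_JOBS) {
            // Naive substring check; use a JSON parser (e.g. Jackson) in practice.
            if (!body.contains("\"name\":\"" + job + "\"")) {
                System.err.println("ALERT: expected job not found: " + job);
            }
        }
        // The quoted numRestarts approach would instead poll the job's metrics
        // endpoint and alert when the cumulative value grows between polls.
    }
}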
Hi Team,
I have a Flink application reading from Kafka. Each payload is a request
sent by a user containing a list of queries. What I would like to do is use
Flink to process the queries in parallel, aggregate the results, and send
them back to the user.
For example, let's say we have two messages in
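One way to sketch that fan-out/fan-in with the DataStream API; everything here is hypothetical since the message is cut off. Requests are assumed to be encoded as "requestId|query1;query2;...", and query execution is stubbed out:

import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class FanOutFanIn {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Stand-in for the Kafka stream of user requests.
        DataStream<String> requests = env.fromElements("r1|q1;q2", "r2|q3");

        requests
            // Fan out: one (requestId, query) record per query, processed in parallel.
            .flatMap((String r, Collector<Tuple2<String, String>> out) -> {
                String[] parts = r.split("\\|");
                for (String q : parts[1].split(";")) {
                    out.collect(Tuple2.of(parts[0], q));
                }
            })
            .returns(Types.TUPLE(Types.STRING, Types.STRING))
            // Stand-in for executing a single query.
            .map(t -> Tuple2.of(t.f0, "result-of-" + t.f1))
            .returns(Types.TUPLE(Types.STRING, Types.STRING))
            // Fan in: group results by request id and combine them. Note that
            // reduce emits an update per element; a window (or count trigger)
            // would be needed to emit exactly one final answer per request.
            .keyBy(t -> t.f0)
            .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + "," + b.f1))
            .print();

        env.execute("fan-out-fan-in-sketch");
    }
}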
Hi all,
I am implementing a metric reporter for New Relic. I'd like it to support the job operator metrics that come with the Flink framework out of the box. To ensure each metric is unique you can't use the
metric name; you need to use the metric identifier. However, I am
One thing you could do is take the first N characters and hash the
remaining ones; I don't think there is a better solution at the moment.
The size of job/task/operator names is a rather fundamental issue that
makes a lot of things complicated (metrics, logging, UI), but we haven't
made any
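As a rough sketch of that truncate-plus-hash idea (the length budget and the hash function are arbitrary choices here):

public class MetricNameShortener {
    /** Keeps the first part of an over-long metric identifier verbatim and
     *  replaces the tail with a short hash of the removed characters. */
    static String shorten(String identifier, int maxLen) {
        if (identifier.length() <= maxLen) {
            return identifier;
        }
        String tail = identifier.substring(maxLen - 9);
        // 8 hex chars of hashCode(); a real reporter might prefer a stronger
        // hash (e.g. SHA-256) to lower the collision risk.
        String hash = String.format("%08x", tail.hashCode());
        return identifier.substring(0, maxLen - 9) + "-" + hash;
    }

    public static void main(String[] args) {
        System.out.println(
            shorten("taskmanager.job.task.operator.someVeryLongOperatorName.numRecordsIn", 40));
    }
}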
Thanks Aljoscha for your prompt response. It means a lot to me.
Could you also attach the code snippets for `KafkaSource`, `KafkaSourceBuilder`,
and `OffsetInitializers` that you were referring to in your previous reply,
to make things clearer for me?
Kind regards,
hello, I've been following the brand-new source implemented under FLIP-27. After upgrading to 1.12 I found there is already a new kafkasource implementation; currently I'm running into difficulties getting the kafkasource to communicate with the coordinator.
Hi,
for your point 3. you can look at
`FlinkKafkaConsumerBase.setStartFromTimestamp(...)`.
Points 1. and 2. will not work with the well established
`FlinkKafkaConsumer`. However, it should be possible to do it with the
new `KafkaSource` that was introduced in Flink 1.12. It might be a bit
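For reference, roughly what starting from a timestamp looks like with the new source; this is a sketch only, since some builder method names shifted slightly between 1.12 and later releases:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaSourceFromTimestamp {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker:9092")
                .setTopics("my-topic")
                .setGroupId("my-group")
                // Start reading at the offsets for this epoch-millis timestamp.
                .setStartingOffsets(OffsetsInitializer.timestamp(1609459200000L))
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .print();
        env.execute();
    }
}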
Hi Omkar,
Since version 1.12.0 you can configure the TaskManager's resource id via
`taskmanager.resource-id` [1]. If it is not set, it defaults to
rpcAddress:rpcPort plus a 6-digit random suffix.
[1] https://issues.apache.org/jira/browse/FLINK-17579
Cheers,
Till
On Thu, Jan 7, 2021 at
Hi Larry,
the basic problem for your use case is that window boundaries are
inclusive for the start timestamp and exclusive for the end timestamp.
It's set up like this to ensure that consecutive tumbling windows don't
overlap. This is only a function of how our `WindowAssigner` works, so
it
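Concretely, for a zero offset the assignment boils down to the following arithmetic (a sketch mirroring how the window start is computed):

public class WindowBoundaries {
    public static void main(String[] args) {
        long size = 5_000L;  // 5-second tumbling windows
        for (long ts : new long[]{0L, 4_999L, 5_000L}) {
            long start = ts - (ts % size);  // inclusive start
            long end = start + size;        // exclusive end
            System.out.printf("ts=%d -> window [%d, %d)%n", ts, start, end);
        }
        // ts=4999 lands in [0, 5000), while ts=5000 opens the next window
        // [5000, 10000), so consecutive tumbling windows never overlap.
    }
}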
So you're saying there is no logging output whatsoever being sent to
Elasticsearch? Did you try and see if the jar file is being picked up?
Are you still getting the pre-defined, text-based logging output?
Best,
Aljoscha
On 2021/01/07 17:04, bat man wrote:
Hi Team,
I have a requirement to
Thanks for the update!
Best,
Aljoscha
On 2021/01/07 16:45, Peter Huang wrote:
Hi,
We ended up finding the root cause. Since some point in time, two of the
partitions of the input topic haven't had any data, which means the second
window operator in the pipeline can't receive the watermark from all of
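An operator's watermark is the minimum over all its inputs, so an idle partition pins it in place. One common mitigation, sketched here under the assumption that per-partition idleness is the culprit (the event type is hypothetical), is an idleness timeout on the watermark strategy:

import java.time.Duration;
import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdlenessExample {
    // Hypothetical event type standing in for the pipeline's records.
    static class MyEvent { long timestamp; }

    public static void main(String[] args) {
        WatermarkStrategy<MyEvent> strategy = WatermarkStrategy
                .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                .withTimestampAssigner((e, ts) -> e.timestamp)
                // Marks a partition idle after 1 minute without records so it
                // no longer holds back the downstream watermark.
                .withIdleness(Duration.ofMinutes(1));
        // Pass `strategy` to env.fromSource(...) or assignTimestampsAndWatermarks(...).
        System.out.println("built: " + strategy);
    }
}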
Hi:
I created a table using the Table API with the following columns:
key: FLOAT,
col1: FLOAT,
col2: FLOAT,
col3: FLOAT
I queried it with the following SQL:
select * from flink where
(key < 5)
or
((key > 10 and key < 12)
or
(key in (15, 16, 17)) or (key > 18 and key <= 19))
Execution failed with the following error:
Exception in thread "main" java.lang.UnsupportedOperationException:
Report link: https://s.tencent.com/research/bsafe/1215.html
In the end I made a breakthrough along the maven-shade-plugin direction.
For the error "Error creating shaded jar SubmissionPublisher" I couldn't find anything useful online,
but upgrading maven-shade-plugin from 3.2.0 to 3.2.4 solved it.
On 2021-01-07 20:55:14, "Dacheng" wrote:
>Hi,
>
>Hello everyone,
>
>A problem I ran into when downgrading Avro:
>the official 1.12 docs mention that Avro 1.10 is currently used, but that it can be downgraded to 1.8.2 if needed
Hi Dongwon,
inferring the type information of Java classes is quite messy. At first, it
seems like that should work out of the box as you are only using A as the
element type of the list, right? However, there is no way of knowing that you
didn't use a subclass of A. Of course, if A were final, it might be
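When extraction can't be certain, the type can be stated explicitly instead. A sketch using a type hint, with a hypothetical POJO A:

import java.util.List;
import org.apache.flink.api.common.typeinfo.TypeHint;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ExplicitTypeInfo {
    // Hypothetical type; if it weren't final, extraction could not rule out subclasses.
    public static final class A {
        public int value;
        public A() {}
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStream<List<A>> lists = env.fromElements(1, 2, 3)
                .map(i -> { A a = new A(); a.value = i; return List.of(a); })
                // State the element type explicitly instead of relying on extraction.
                .returns(new TypeHint<List<A>>() {});
        lists.print();
        env.execute();
    }
}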
The checkpoints themselves report no errors, but the SDK used by the configured checkpoint backend logs an error, so I'm not sure whether it actually matters. The stack trace is below. Can someone help me work out whether this occurs while writing checkpoint data? If so, what does the 404 mean? Not found? Not found what?
com.baidubce.BceServiceException: Not Found (Status Code: 404; Error Code:
null; Request ID: 624d3468-8d7b-46f7-be5d-750c9039893d)
at