First of all, there is this PR
https://github.com/apache/flink/pull/9581 that may be interesting to
Second, I think you have to keep in mind that the hourly bucket
reporting will be per subtask. So if you have a parallelism of 4, each
of the 4 subtasks will report individually that
I managed to fix it; however, I ran into another problem that I would
appreciate help in resolving.
It turns out that the username was different on each of the three nodes.
Using the same username on all of them fixed the issue, i.e.
Recently we decided to upgrade from Flink 1.7.2 to 1.8.1. After the upgrade,
our task managers started to fail with a SIGSEGV error from time to time.
In the process of adjusting the code to 1.8.1, we noticed that there were some
changes around the TypeSerializerSnapshot interface and its
Thanks Till, I will continue to follow this issue and see what we can do.
Till Rohrmann wrote on Wednesday, September 11, 2019, 5:12 PM:
> Suggestion 1 makes sense. For the quick termination I think we need to
> think a bit more about it to find a good solution also to support strict
Thanks for sharing your thoughts. I’ll give it a try.
From: Fabian Hueske
Sent: Wednesday, September 11, 2019 09:55
Subject: Re: Filter events based on future events
I wanted to reach out to you and ask how many of you are using a customized
RestartStrategy in production jobs.
We are currently developing the new Flink scheduler which interacts
with restart strategies in a different way. We have to re-design the
interfaces for the new
Hi Anyang and Till,
I think we agreed on making the interval configurable in this case. Let me
revise the current PR. You can review it after that.
On Thu, Sep 12, 2019 at 12:53 AM Anyang Hu wrote:
> Thanks Till, I will continue to follow this issue and see what we
I'm trying to add authentication to the web dashboard using `nginx`. Flink's
`rest.port` is set to `8081`, and connections to this port are blocked by the
firewall. I'm using `nginx` to listen for requests on port 8080 and redirect
them to port 8081 with username/password authentication (Port
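A minimal sketch of such an nginx front end, assuming the dashboard is reachable on `localhost:8081` and that an htpasswd file has already been created at `/etc/nginx/.htpasswd` (both paths, and the realm name, are assumptions, not from the original mail):

```nginx
# Hypothetical reverse proxy: listen on 8080, require basic auth,
# forward everything to Flink's REST endpoint on 8081.
server {
    listen 8080;

    location / {
        auth_basic           "Flink Dashboard";
        auth_basic_user_file /etc/nginx/.htpasswd;  # created with htpasswd
        proxy_pass           http://localhost:8081;
        proxy_set_header     Host $host;
    }
}
```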
Thanks Oytun for the reply!
Sorry for not having stated it clearly. By "customized
RestartStrategy", we mean that users implement a RestartStrategy
themselves and use it by configuring something like "restart-strategy:
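For contrast, a built-in (non-customized) strategy is configured in `flink-conf.yaml` like this; the values are illustrative:

```yaml
# Built-in fixed-delay restart strategy (illustrative values)
restart-strategy: fixed-delay
restart-strategy.fixed-delay.attempts: 3
restart-strategy.fixed-delay.delay: 10 s
```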
We are using custom restart strategy like this:
On Thu, Sep 12, 2019 at
Turns out there was some other deserialization problem unrelated to this.
On Mon, Sep 9, 2019 at 11:15 AM Catlyn Kong wrote:
> Hi fellow streamers,
> I'm trying to support the Avro BYTES type in my Flink application. Since
> ByteBuffer isn't a supported type, I'm converting the field to an
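A common way to handle the BYTES case is to copy the ByteBuffer's readable bytes into a plain `byte[]`, which Flink's type system does accept. A small sketch; the class and method names are mine, not from the original mail:

```java
import java.nio.ByteBuffer;

// Hypothetical helper: copy the readable bytes of an Avro BYTES field
// (exposed as a ByteBuffer) into a byte[].
public class ByteBufferUtil {
    public static byte[] toArray(ByteBuffer buf) {
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out); // duplicate() so the caller's position is untouched
        return out;
    }
}
```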
Thanks for letting me know. I have found it, but we also need the option to
register Avro schemas and use the registry when we write to Kafka. So we will
create a serialization version and, once it works, implement it in Flink and
create a pull request for the community.
Just for a Kafka source:
- There is also a version of this schema available that can look up the
writer’s schema (the schema which was used to write the record) in Confluent
I am compiling a new version of Mesos, and when I test it again I will reply
here if I find an error.
On Wed, 11 Sep 2019, 09:22 Gary Yao, wrote:
> Hi Felipe,
> I am glad that you were able to fix the problem yourself.
> > But I suppose that Mesos will allocate Slots and Task
Thanks a lot everyone for the warm welcome. Happy Mid-autumn Festival!
Leonard Xu wrote on Thursday, September 12, 2019, 11:05 AM:
> Congratulations Zili Chen!
> Leonard Xu
> > On September 12, 2019, at 11:02 AM, Yun Tang wrote:
> > Congratulations Zili
> > Best
> > Yun Tang
I have a standalone cluster. I have added my own library (jar file) to the
lib/ folder in Flink. I submit my job from the CLI after I start the cluster.
Now I want to externalize a property file which has to be read by this
library. Since this library is loaded by Flink's classloader and not the
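One way around the classloader question is to read the file from an explicit filesystem path instead of from the classpath. A minimal sketch, assuming the path is handed to the job at submission time; the class name and the `-D` property name are hypothetical:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical helper: load a properties file from an absolute path
// (e.g. passed via -Dmyapp.config.path=/etc/myapp/app.properties),
// bypassing the classloader of the jar sitting in Flink's lib/ folder.
public class ExternalConfig {
    public static Properties load(String path) throws IOException {
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        return props;
    }
}
```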
I came across an issue during job submission via the Flink CLI client with Flink
1.7.1 in high availability mode.
Flink version: 1.7.1
Mode: High availability with 2 JobManagers
./bin/flink run -d -c MyExample /myexample.jar
The CLI runs inside a K8s job and
val result = tableEnv.sqlQuery(
  s"SELECT COUNT(0) as pv, COUNT(distinct curuserid) as uv," +
  s" TUMBLE_END(rowtime, INTERVAL '10' MINUTE) FROM
守护 <346531...@qq.com> wrote on Thursday, September 12, 2019, 2:35 PM:
> waterMarkStream, 'curuserid,'timelong,'rowtime.rowtime)
> val result =
On 2019/9/5 at 4:08 PM, "陈赋赟" wrote:
Initially, the requirement was to count the total number of user browsing events in a 90-day event window: if there is a browsing event within the last 30 days, add 1; if there is no browsing event within 30 days but between 30 days and ~