Hi Anton,
First of all, there is this PR
https://github.com/apache/flink/pull/9581 that may be interesting to
you.
Second, I think you have to keep in mind that the hourly bucket
reporting will be per-subtask. So if you have parallelism of 4, each
of the 4 tasks will report individually that they
Hi everyone,
Recently we decided to upgrade from Flink 1.7.2 to 1.8.1. After the upgrade,
our task managers started to fail with SIGSEGV errors from time to time.
In the process of adjusting the code to 1.8.1, we noticed that there were some
changes around the TypeSerializerSnapshot interface and its implem
I managed to fix it; however, I ran into another problem that I would
appreciate help in resolving.
It turns out that the username for all three nodes was different. Having
the same username for them fixed the issue, i.e.:
same_username@slave-node2-hostname
same_username@slave-node3-hostname
same_userna
Thanks Till, I will continue to follow this issue and see what we can do.
Best regards,
Anyang
Till Rohrmann wrote on Wednesday, September 11, 2019 at 5:12 PM:
> Suggestion 1 makes sense. For the quick termination I think we need to
> think a bit more about it to find a good solution also to support strict
> SLA requirem
Hi Anyang and Till,
I think we agreed on making the interval configurable in this case. Let me
revise the current PR. You can review it after that.
Best Regards
Peter Huang
On Thu, Sep 12, 2019 at 12:53 AM Anyang Hu wrote:
> Thanks Till, I will continue to follow this issue and see what we c
Hi Fabian,
Thanks for sharing your thoughts. I’ll give it a try.
Best regards
Theo
From: Fabian Hueske
Sent: Wednesday, September 11, 2019 09:55
To: theo.diefent...@scoop-software.de
Cc: user
Subject: Re: Filter events based on future events
Hi Theo,
I would imple
Hi all,
I'm trying to add authentication to the web dashboard using `nginx`. Flink's
`rest.port` is set to `8081`, connection to this port is disabled by firewall.
I'm using `nginx` to listen for requests on port 8080 and proxy them to port
8081 with username/password authentication (Port 808
Hi everyone,
I wanted to reach out to you and ask how many of you are using a customized
RestartStrategy[1] in production jobs.
We are currently developing the new Flink scheduler[2] which interacts
with restart strategies in a different way. We have to re-design the
interfaces for the new restar
Hi Zhu,
We are using custom restart strategy like this:
environment.setRestartStrategy(failureRateRestart(2, Time.minutes(1),
Time.minutes(10)));
---
Oytun Tez
*M O T A W O R D*
The World's Fastest Human Translation Platform.
oy...@motaword.com — www.motaword.com
On Thu, Sep 12, 2019 at 7:11
Thanks Oytun for the reply!
Sorry for not having stated it clearly. When saying "customized
RestartStrategy", we mean that users implement an
*org.apache.flink.runtime.executiongraph.restart.RestartStrategy* by
themselves and use it by configuring like "restart-strategy:
org.foobar.MyRestartStrategy
Just for a Kafka source:
https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/kafka.html#the-deserializationschema
- There is also a version of this schema available that can look up the
writer's schema (the schema that was used to write the record) in the Confluent
Schema Regi
Turns out there was some other deserialization problem unrelated to this.
On Mon, Sep 9, 2019 at 11:15 AM Catlyn Kong wrote:
> Hi fellow streamers,
>
> I'm trying to support avro BYTES type in my flink application. Since
> ByteBuffer isn't a supported type, I'm converting the field to an
> Array
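The ByteBuffer-to-array conversion mentioned in the quoted message can be sketched as follows (a minimal sketch; `toByteArray` is a hypothetical helper name, not from the original thread):

```scala
// Sketch: copy an Avro BYTES field (java.nio.ByteBuffer) into an
// Array[Byte] without disturbing the buffer's position for other readers.
import java.nio.ByteBuffer

def toByteArray(buf: ByteBuffer): Array[Byte] = {
  val dup = buf.duplicate()               // independent position/limit
  val out = new Array[Byte](dup.remaining())
  dup.get(out)                            // fills out with the remaining bytes
  out
}
```

Duplicating the buffer first avoids the subtle bug where reading advances the original buffer's position and later readers see an empty buffer.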
Hi Elias
Thanks for letting me know. I have found it, but we also need the option to
register Avro schemas and use the registry when we write to Kafka. So we will
create a serialisation version and, once it works, implement it in Flink and
create a pull request for the community.
Med venlig hilsen ("Best regards" in Danish)
I have a standalone cluster. I have added my own library (jar file) to the
lib/ folder in Flink. I submit my job from the CLI after I start the cluster.
Now I want to externalize a property file which has to be read by this
library. Since this library is loaded by flink's classloader and not the
applic
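One common workaround for a library loaded from Flink's lib/ folder is to read the properties file directly from the filesystem, at a path passed via a JVM system property (a sketch under that assumption; `mylib.config` and `loadExternalProps` are hypothetical names):

```scala
// Sketch: load an external .properties file from a path supplied at launch,
// e.g. through a JVM option such as a hypothetical
//   -Dmylib.config=/etc/mylib/app.properties
// Reading from the filesystem directly sidesteps the classloader question.
import java.io.FileInputStream
import java.util.Properties

def loadExternalProps(path: String): Properties = {
  val props = new Properties()
  val in = new FileInputStream(path)
  try props.load(in) finally in.close()
  props
}
```

The JVM option can be set for the whole cluster (for instance via `env.java.opts` in flink-conf.yaml); the actual property key is whatever the library expects.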
Thanks Gary,
I am compiling a new version of Mesos and when I test it again I will reply
here if I find an error.
On Wed, 11 Sep 2019, 09:22 Gary Yao, wrote:
> Hi Felipe,
>
> I am glad that you were able to fix the problem yourself.
>
> > But I suppose that Mesos will allocate Slots and Task
Thanks a lot everyone for the warm welcome. Happy Mid-autumn Festival!
Best,
tison.
Leonard Xu wrote on Thursday, September 12, 2019 at 11:05 AM:
> Congratulations Zili Chen ! !
>
> Best,
> Leonard Xu
> > On Sep 12, 2019, at 11:02 AM, Yun Tang wrote:
> >
> > Congratulations Zili
> >
> > Best
> > Yun Tang
> >
Hi,
I came across an issue during job submission via the Flink CLI client with
Flink 1.7.1 in high availability mode.
Setup:
Flink version:: 1.7.1
Cluster:: K8s
Mode:: High availability with 2 jobmanagers
CLI Command
./bin/flink run -d -c MyExample /myexample.jar
The CLI runs inside a K8s job and s
Hi
You can do it this way:
Use Typesafe Config, which provides excellent configuration facilities.
You supply a default configuration, which your application reads through
Typesafe's reference.conf file. If you want to override any of the defaults,
you can supply them as command line arguments
Sorry, there is a typo; corrected:
val pmtool = ParameterTool.fromArgs(args)
// Default config from Typesafe's reference.conf / application.conf / system properties
val defaultConfig = ConfigFactory.load()
// Note: ConfigFactory.parseMap (not load) accepts a Map of overrides
val overrideConfigFromArgs = ConfigFactory.parseMap(pmtool.toMap)
// Args take precedence; defaults fill in anything not overridden
val finalConfig = overrideConfigFromArgs.withFallback(defaultConfig)