Hi all,
The external database contains a set of rules for each key, and these rules
should be applied to each stream element in the Flink job. Because it is very
expensive to make a DB call for each element to retrieve the rules, I want to
fetch the rules from the database at initialization
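One way to sketch the idea in plain Java (no Flink APIs; class and method names are illustrative, not from the original mail): load all rules with a single bulk query at start-up, then serve per-element lookups from memory. In an actual Flink job the loading step would typically live in the open() method of a RichFunction.

```java
import java.util.HashMap;
import java.util.Map;

// Load the full rule set once instead of querying the DB per element.
public class RuleCache {
    private final Map<String, String> rulesByKey = new HashMap<>();
    private int dbCalls = 0;

    // Stand-in for one bulk DB query at initialization time.
    public void open() {
        dbCalls++;
        rulesByKey.put("key-1", "rule-A");
        rulesByKey.put("key-2", "rule-B");
    }

    // Per-element lookup is now an in-memory read, not a DB round trip.
    public String ruleFor(String key) {
        return rulesByKey.getOrDefault(key, "default-rule");
    }

    public int dbCallCount() { return dbCalls; }
}
```

If the rules change over time, the same cache could be refreshed on a timer or replaced with Flink's broadcast state, but that is beyond this sketch.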
persistent znode "/leader/resource_manager_lock" of ZooKeeper.
Best,
Yangze Guo
On Fri, Jan 17, 2020 at 5:11 PM Yang Wang wrote:
>
> Hi Kumar Bolar, Harshith,
>
Could you please check the jobmanager log to find out what address the
akka is listening on
Hi all,
We were previously using RHEL for our Flink machines. I'm currently working on
moving them over to Ubuntu. When I start the task manager, it fails to connect
to the job manager with the following message -
2020-01-16 10:54:42,777 INFO
Hi all,
I'm running a standalone Flink cluster with Zookeeper and S3 for high
availability storage. All of a sudden, the job managers started failing with an
S3 `UnrecoverableS3OperationException` error. Here is the full error trace -
```
java.lang.RuntimeException:
Hi all,
I’ve deployed a job on my Flink cluster which has 3 task managers and 2 job
managers.
Like clockwork, every two minutes the job restarts with the following error.
java.io.IOException: Connecting the channel failed: Connecting to remote task
manager +
Hi all,
Prior to upgrading to 1.8, there was one active job manager and when I try to
access the inactive job manager's web UI, the page used to get redirected to
the active job manager. But now there is no redirection happening from the
inactive JM to active JM. Did something change to the
. I added a comment to the (closed)
Jira issue, so this might be fixed in a future release.
Cheers,
Wouter
On Mon, 20 May 2019 at 16:18, Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi Wouter,
I’ve upgraded Flink to 1.8, but now I only see Internal server error on the
dashboar
On Thu, 16 May 2019 at 16:05, Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
After upgrading Flink to 1.7.2, when I try to submit a job from the dashboard
and there'
Hi all,
We have a central Flink cluster which will be used by multiple different teams
(Data Science, Engineering etc). Each team has their own user and keytab to
connect to services like Kafka, Cassandra etc. How should the jobs be
configured such that different jobs use different keytabs and
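As far as I know, a Flink session cluster reads a single Kerberos identity from its configuration, so truly separate keytabs per team usually means separate (per-team or per-job) clusters, each with its own flink-conf.yaml. A hedged sketch, with hypothetical paths and principal:

```
# flink-conf.yaml of the Data Science team's cluster (paths are examples)
security.kerberos.login.keytab: /etc/security/keytabs/datascience.keytab
security.kerberos.login.principal: datascience@EXAMPLE.COM
security.kerberos.login.contexts: Client,KafkaClient
```

Whether a single shared cluster can serve multiple identities depends on the Flink version and the services involved, so check the security section of the docs for your release.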
Hi all,
We have recently upgraded from 1.7.2 to 1.8.0. In 1.7.2, when I started a task
manager in standalone mode, the actor system was started with a host
name. But after upgrading to 1.8.0, it is started with the IP
address. As a result, the task managers on the dashboard show
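If it helps, Flink lets you pin the address a TaskManager binds and registers with via configuration; a hedged sketch (hostname is an example, and the exact key should be checked against your version's docs):

```
# flink-conf.yaml on each task manager host (value is an example)
taskmanager.host: tm1.example.com
```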
Hi all,
After upgrading Flink to 1.7.2, when I try to submit a job from the dashboard
and there's some issue with the job, the job submission fails with the
following error.
Exception occurred in REST handler:
org.apache.flink.client.program.ProgramInvocationException: The main method
Hi all,
I had a requirement to handle Kafka producer exceptions so that they don’t
bring down the job. I extended FlinkKafkaProducer010 and handled the exceptions
there.
public void invoke(T value, Context context) throws Exception {
    try {
        this.checkErroneous();
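As a plain-Java illustration of the pattern in the snippet above (class and interface names are hypothetical; no Kafka or Flink APIs): wrap the write call, count/log failures, and keep processing instead of rethrowing, so a bad record cannot bring down the job.

```java
// Sketch of an error-swallowing sink wrapper. SinkOp stands in for the
// real producer call (e.g. checkErroneous() + send).
public class TolerantSink {
    public interface SinkOp { void write(String value) throws Exception; }

    private final SinkOp delegate;
    private int failures = 0;

    public TolerantSink(SinkOp delegate) { this.delegate = delegate; }

    public void invoke(String value) {
        try {
            delegate.write(value);   // the call that may throw
        } catch (Exception e) {
            failures++;              // log and continue; do not rethrow
        }
    }

    public int failureCount() { return failures; }
}
```

Note that silently swallowing producer errors trades at-least-once guarantees for availability, so in practice the failures should at least be logged or sent to a dead-letter topic.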
lution on this.
Sincerely,
Andrea
On Mon, 18 Mar 2019 at 11:53, Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
We're running Flink on a five-node standalone cluster with three task managers
(TM1, TM2, TM3) and two job managers.
Whenever I submit a new job, the job gets deployed only on TM3. When the number
of slots on TM3 is exhausted, jobs start getting deployed on TM2, and so
on. How do
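For what it's worth, later Flink releases (around 1.9.2/1.10, if I remember correctly) added a setting that spreads slots across all TaskManagers instead of filling one machine first; this is an assumption to verify against the docs for your version:

```
# flink-conf.yaml (only available in newer Flink releases)
cluster.evenly-spread-out-slots: true
```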
On Fri, Mar 15, 2019 at 3:36 AM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi Gary,
An update. I notic
ase the "cluster" hostname is advertised, which hints at a
problem in your Flink configuration.
Best,
Gary
On Thu, Mar 14, 2019 at 4:54 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi Gary,
I’ve attached the relevant portions of the JM and TM logs.
Job M
logged.
The TaskManager tries to connect to the leader that is advertised in
ZooKeeper. In your case the "cluster" hostname is advertised, which hints at a
problem in your Flink configuration.
Best,
Gary
On Thu, Mar 14, 2019 at 4:54 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
H
March 2019 at 9:06 PM
To: Harshith Kumar Bolar
Cc: user
Subject: [External] Re: Flink 1.7.2: Task Manager not able to connect to Job
Manager
Hi Harshith,
Can you share JM and TM logs?
Best,
Gary
On Thu, Mar 14, 2019 at 3:42 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
I'm trying to upgrade our Flink cluster from 1.4.2 to 1.7.2.
When I bring up the cluster, the task managers refuse to connect to the job
managers with the following error.
2019-03-14 10:34:41,551 WARN akka.remote.ReliableDeliverySupervisor
- Association with remote
this is one way. Another way could be to look into the logs of the running
TaskManagers. They should contain the path of the blob store directory.
Cheers,
Till
On Thu, Feb 28, 2019 at 12:04 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Is there any way to figure out which one is bei
ively supported, I would suggest upgrading to the latest Flink
version and checking whether the problem still occurs.
Cheers,
Till
On Tue, Feb 26, 2019 at 2:48 AM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
We're running Flink on a standalone five-node cluster. The /tmp/ directory
keeps filling up with directories starting with blobstore--*. These directories
are very large (approx. 1 GB) and fill up the space very quickly, and the jobs
fail with a "No space left on device" error. The files in
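One mitigation, assuming the blob files themselves are legitimate, is to move Flink's blob storage and temp directories off /tmp onto a volume with more space. A hedged flink-conf.yaml sketch (paths are examples; verify the option names against your version's configuration docs):

```
# flink-conf.yaml: keep blob store and temp files off the small /tmp volume
blob.storage.directory: /data/flink/blob
io.tmp.dirs: /data/flink/tmp
```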
of
the window, because by definition there is no late data and state does not need
to be kept around.
On Thu, Feb 14, 2019 at 1:03 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Thanks Konstantin,
But I’m using processing time, hence no watermarks. Will the state still be
cleared automat
Hi all,
My application uses a keyed window that is keyed by a function of timestamp.
This means once that particular window has been fired and processed, there is
no use in keeping that key active because there is no way that particular key
will appear again. Because this use case involves
Typo: lines 1, 2 and 5
From: Harshith Kumar Bolar
Date: Tuesday, 29 January 2019 at 10:16 AM
To: "user@flink.apache.org"
Subject: KeyBy is not creating different keyed streams for different keys
Hi all,
I'm reading a simple JSON string as input and keying the stream based on two
fields A and B. But KeyBy is generating the same keyed stream for different
values of B but for a particular combination of A and B.
The input:
{
"A": "352580084349898",
"B": "1546559127",
"C":
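One common way to avoid two different (A, B) pairs collapsing into the same keyed stream is to derive a single explicit composite key with a delimiter. A plain-Java sketch (no Flink APIs; names are illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Build one composite key from fields A and B so records group together
// only when BOTH fields match. The delimiter prevents accidental clashes
// such as ("ab","c") vs ("a","bc").
public class CompositeKeyDemo {
    public static String keyOf(String a, String b) {
        return a + "|" + b;
    }

    // Group records (arrays of {A, B, payload}) by composite key.
    public static Map<String, List<String>> groupByKey(List<String[]> records) {
        Map<String, List<String>> groups = new HashMap<>();
        for (String[] r : records) {
            groups.computeIfAbsent(keyOf(r[0], r[1]), k -> new ArrayList<>()).add(r[2]);
        }
        return groups;
    }
}
```

In a Flink job the same string would be returned from a KeySelector, which sidesteps subtle hashCode issues with composite key objects.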
or a map before
your window operator, put System.currentTimeMillis() in the record itself, and
then use the evictor and the current processing time to clean up.
I hope this helps,
Kostas
On Fri, Jan 18, 2019 at 9:25 AM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
I'
b recovers.
Best,
Fabian
On Fri, 18 Jan 2019 at 10:53, Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
We're running a standalone Flink cluster with 2 Job Managers and 3 Task
Managers. Whenever a TM crashes, we simply restart that particular TM and
proceed with the processing.
But reading the comments on
Hi all,
I'm using Global Windows for my application with a custom trigger and custom
evictor based on some conditions. Now, I also want to evict those elements from
the window that have stayed there for too long, let's say 30 mins. How would I
go about doing this? Is there a utility that Flink
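One way to sketch the time-based eviction (plain Java, no Flink APIs; names are illustrative): stamp each element with the processing time when it enters the window, and before each firing drop everything that has been there longer than the cut-off.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Age-based eviction for a window buffer: elements carry their insertion
// time, and evictExpired() drops everything older than maxAgeMillis.
public class AgeEvictor<T> {
    static final class Entry<T> {
        final T value;
        final long insertedAt;
        Entry(T value, long insertedAt) { this.value = value; this.insertedAt = insertedAt; }
    }

    private final Deque<Entry<T>> window = new ArrayDeque<>();
    private final long maxAgeMillis;

    public AgeEvictor(long maxAgeMillis) { this.maxAgeMillis = maxAgeMillis; }

    public void add(T value, long nowMillis) {
        window.addLast(new Entry<>(value, nowMillis));   // insertion order = time order
    }

    // Drop expired elements from the front; returns how many were evicted.
    public int evictExpired(long nowMillis) {
        int evicted = 0;
        while (!window.isEmpty() && nowMillis - window.peekFirst().insertedAt > maxAgeMillis) {
            window.removeFirst();
            evicted++;
        }
        return evicted;
    }

    public int size() { return window.size(); }
}
```

In Flink the same logic would live in a custom Evictor applied to the GlobalWindow, called in evictBefore() with the current processing time.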
Hi all,
We run Flink on a five node cluster – three task managers, two job managers.
One of the job managers, running on the flink2-0 node, is down and refuses to come
back up, so the cluster is currently running with a single job manager. When I
restart the service, I see this in the logs. Any idea
Thanks,
Amit
On Mon, Oct 15, 2018 at 3:15 PM Kumar Bolar, Harshith <hk...@arity.com> wrote:
Hi all,
We store Flink checkpoints in Amazon S3. Flink periodically sends out GET, PUT,
LIST, DELETE requests to S3, to store-clear checkpoints. From the logs, we see
that GET, PUT and LIST requests are successful but it throws an AWS access
denied error for DELETE request.
Here’s a snippet
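The symptom (GET/PUT/LIST succeed, DELETE fails) usually points at an IAM policy that grants read/write but not delete on the checkpoint bucket. A hedged sketch of the statement that may be missing (bucket name is hypothetical):

```
{
  "Effect": "Allow",
  "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
  "Resource": "arn:aws:s3:::my-flink-checkpoints/*"
}
```

Bucket-level actions such as s3:ListBucket go in a separate statement on the bucket ARN itself, without the /* suffix.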
Hi folks,
We're currently deploying our Flink applications as a fat-jar using the
maven-shade-plugin. Problem is, each application jar ends up being
approximately 130-140 MB which is a pain to build and deploy every time. Is
there a way to exclude dependencies and just deploy a thin jar to the
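The usual approach is to mark the Flink dependencies that are already on the cluster's classpath as provided, so maven-shade leaves them out of the fat jar and only the application's own code and extra libraries get shaded in. A sketch (artifact id and Scala suffix are assumptions; match them to your Flink version):

```
<!-- pom.xml: keep Flink core out of the shaded jar; the cluster provides it -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>${flink.version}</version>
  <scope>provided</scope>
</dependency>
```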