ager.memory.preallocate.
>
>
>But actually, what are you trying to fix? Is your node flushing data to disk yet? Or
>has it just not accumulated that much operator state yet?
>
>
>
>Nico
>
>[1] https://ci.apache.org/projects/flink/flink-docs-release-1.3/setup/config.html
Forgot to mention I am running 10 slots on each machine and
taskmanager.network.numberOfBuffers: 1
Is there scope for improving memory usage?
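As a rough sketch, assuming the #slots-per-TM^2 * #TMs * 4 rule of thumb from the config page in [1], 10 slots on each of 5 machines would call for network buffer settings on the order of:
taskmanager.numberOfTaskSlots: 10
taskmanager.network.numberOfBuffers: 2000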
From: Sathi Chowdhury
<sathi.chowdh...@elliemae.com<mailto:sathi.chowdh...@elliemae.com>>
Date: Monday, May 29, 2017 at 9:55
Hello Flink Dev and Community
I have 5 task managers, each with 64 GB of memory.
I am running Flink on YARN with task manager heap taskmanager.heap.mb: 563200
Flink still shows that it is using about 21 GB of memory, leaving 35 GB
available. How and what can I do to fix it?
Please suggest
Thanks
Sathi
Hi Till/ flink-devs,
I am trying to understand why adding slots in the task manager is having no
impact on performance for the test pipeline.
Here is my flink-conf.yaml
jobmanager.rpc.address: localhost
jobmanager.rpc.port: 6123
jobmanager.heap.mb: 1024
taskmanager.memory.preallocate: false
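Note that extra slots only help if the job's parallelism actually rises to use them; as a sketch (values illustrative), both of these would need to move together in flink-conf.yaml, or the parallelism has to be set on the job itself:
taskmanager.numberOfTaskSlots: 4
parallelism.default: 4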
st happens that with parallelism 1,
the `markAsTemporarilyIdle` method is not called in the Kinesis connector,
hence the exception did not pop up.
Cheers,
Gordon
On 24 May 2017 at 7:27:08 AM, Sathi Chowdhury
(sathi.chowdh...@elliemae.com<mailto:sathi.chowdh...@elliemae.com>) wrote:
Need quick help on this
Trying to run
/bin/flink run -p 6 -c CLASSNAME /mnt/flink/jarname.jar
Everything works without the -p option, but then it runs with parallelism 1, which is
what I am trying to get past.
java.lang.NoSuchMethodError:
Hi Till, thanks for your reply. I have to try out my fat jar without including the Hadoop
classes as well.
From: Till Rohrmann <trohrm...@apache.org>
Date: Tuesday, May 23, 2017 at 7:12 AM
To: Ted Yu <yuzhih...@gmail.com>
Cc: Sathi Chowdhury <sathi.chowdh...@elliemae.com>, user <
We are running Flink 1.2 in pre-production.
I am trying to test checkpoints stored in an external location on S3.
I have set the below in flink-conf.yaml:
state.backend: filesystem
state.checkpoints.dir: s3://abc-checkpoint
state.backend.fs.checkpointdir: s3://abc-checkpoint
I get this failure in
I was able to bypass that one by running it from bin/flink …
Now encountering:
by: java.lang.NullPointerException: Checkpoint properties say that the
checkpoint should have been persisted, but missing external path.
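In case it helps, a minimal sketch of how externalized checkpoints are enabled on the job side with the Flink 1.2-era API (state.checkpoints.dir from flink-conf.yaml then supplies the external path):

import org.apache.flink.streaming.api.environment.CheckpointConfig;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(60000); // checkpoint every 60 seconds
env.getCheckpointConfig().enableExternalizedCheckpoints(
        CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);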
From: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Date: Monday, May 22
Hi Flink Dev,
I am running Flink on YARN from EMR and I was running this command to test an
external savepoint:
flink savepoint 8c4c885c5899544de556c5caa984502d /mnt
The program finished with the following exception:
org.apache.flink.runtime.leaderretrieval.LeaderRetrievalException: Could not
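When the job runs in a YARN session, the CLI usually also has to be pointed at the YARN application; a sketch (the application id is a placeholder):

flink savepoint 8c4c885c5899544de556c5caa984502d /mnt -yid <yarnApplicationId>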
by an outside service then obviously no issues..
From: Alex Reid <alex.james.r...@gmail.com>
Date: Tuesday, April 25, 2017 at 4:31 PM
To: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Cc: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: put record to kin
From: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Date: Tuesday, April 25, 2017 at 3:56 PM
To: "Tzu-Li (Gordon) Tai" <tzuli...@apache.org>, "user@flink.apache.org"
<user@flink.apache.org>
Subject: Re: put record to kinesis and then trying consume using flink
ted into is “mystream”.
However,
DataStream outputStream = see.addSource(new
FlinkKinesisConsumer<>("myStream", new MyDeserializerSchema(), consumerConfig));
you seem to be consuming from “myStream”.
Could the capital “S” be the issue?
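One small way to rule that out is to share a single constant between the producing and consuming code, e.g. (names reused from the thread):

private static final String STREAM_NAME = "mystream";

DataStream<String> outputStream = see.addSource(
        new FlinkKinesisConsumer<>(STREAM_NAME, new MyDeserializerSchema(), consumerConfig));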
Cheers,
Gordon
On 24 April 201
Hi Flink Dev,
I thought something would work easily with Flink since it is simple enough, yet I
am struggling to make it work.
I am using the Flink Kinesis connector, the 1.3-SNAPSHOT version.
Basically I am trying to bootstrap a stream with one event pushed into it as a
warmup inside the Flink job’s
)
at com.esotericsoftware.kryo.io.Output.flush(Output.java:163)
... 17 more
Thanks
From: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Date: Friday, April 14, 2017 at 1:29 AM
To: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: Flink errors out and job fails--IOException from Co
et.Socket.(Socket.java:244)
at
org.apache.flink.contrib.streaming.CollectSink.open(CollectSink.java:75)
... 6 more
From: Ted Yu <yuzhih...@gmail.com>
Date: Thursday, April 13, 2017 at 6:01 PM
To: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Cc: "user@flink.apache.o
)
(8a7301a437cb2d052208ee42c994104b).
From: Sathi Chowdhury <sathi.chowdh...@elliemae.com>
Date: Thursday, April 13, 2017 at 5:44 PM
To: Ted Yu <yuzhih...@gmail.com>
Cc: "user@flink.apache.org" <user@flink.apache.org>
Subject: Re: Flink errors out and job fails--IOException fro
.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:343)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:655)
at java.lang.Thread.run(Thread.java:745)
Thanks
Sathi
From: Ted Yu <yuzhih...@gmail.com>
Date: Thursday, April 13, 20
Has someone encountered this error? I am using the DataStream API to read from
a Kinesis stream. This happens intermittently and the Flink job dies.
reamShard{streamName='dev-ingest-kinesis-us-west-2', shard='{ShardId:
shardId-0009,HashKeyRange: {StartingHashKey:
In order to use the latest Kibana, I wanted to know if I can use Elasticsearch 5.x
and integrate it with flink-connector-elasticsearch_2.10 version
1.3-SNAPSHOT?
Thanks
Sathi
Hi fellow Flink enthusiasts,
I am trying to figure out a recommended way to read S3 data while I am trying to
write to S3 using BucketingSink.
BucketingSink s3Sink = new BucketingSink("s3://" + entityBucket
+ "/")
.setBucketer(new EntityCustomBucketer())
.setWriter(new
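For reference, a minimal sketch of a complete BucketingSink setup along those lines (EntityCustomBucketer, entityBucket and outputStream are taken from or assumed by the thread; the batch size is just an example value):

import org.apache.flink.streaming.connectors.fs.StringWriter;
import org.apache.flink.streaming.connectors.fs.bucketing.BucketingSink;

BucketingSink<String> s3Sink = new BucketingSink<>("s3://" + entityBucket + "/");
s3Sink.setBucketer(new EntityCustomBucketer()); // custom Bucketer from the thread
s3Sink.setWriter(new StringWriter<>());         // write each record as a plain string
s3Sink.setBatchSize(1024 * 1024 * 64);          // roll part files at roughly 64 MB
outputStream.addSink(s3Sink);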
es not
take part in Flink's checkpointing mechanism. Unfortunately, Flink does not
have a streaming JDBC connector, yet.
Cheers,
Till
On Thu, Mar 2, 2017 at 7:21 AM, Sathi Chowdhury
<sathi.chowdh...@elliemae.com<mailto:sathi.chowdh...@elliemae.com>> wrote:
Hi All,
Is there any
in mind?
Cheers,
Gordon
On February 21, 2017 at 11:54:21 AM, Sathi Chowdhury
(sathi.chowdh...@elliemae.com<mailto:sathi.chowdh...@elliemae.com>) wrote:
Hi flink users and experts,
In my Flink processor I am trying to use the Flink Kinesis connector. I read from
a Kinesis
I get the NPE from the code below.
I am running this from my Mac on a local Flink cluster.
RollingSink s3Sink = new RollingSink("s3://sc-sink1/");
s3Sink.setBucketer(new DateTimeBucketer("-MM-dd--HHmm"));
s3Sink.setWriter(new StringWriter());
s3Sink.setBatchSize(200);
Hi All,
Is there any preferred way to manage multiple JDBC connections from Flink? I
am new to Flink and looking for some guidance around the right pattern and APIs
to do this. The use case needs to route a stream to a particular JDBC connection
depending on a field value. So the records are
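A minimal sketch of one possible pattern: keep a map of connections inside a rich sink and pick one per record (the Tuple2 record shape, table and SQL are hypothetical, and as noted elsewhere in the thread a plain JDBC sink like this does not take part in checkpointing):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.HashMap;
import java.util.Map;

import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

public class RoutingJdbcSink extends RichSinkFunction<Tuple2<String, String>> {
    private final HashMap<String, String> jdbcUrlsByKey; // routing key -> JDBC URL
    private transient Map<String, Connection> connections;

    public RoutingJdbcSink(HashMap<String, String> jdbcUrlsByKey) {
        this.jdbcUrlsByKey = jdbcUrlsByKey;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        connections = new HashMap<>();
        for (Map.Entry<String, String> e : jdbcUrlsByKey.entrySet()) {
            connections.put(e.getKey(), DriverManager.getConnection(e.getValue()));
        }
    }

    @Override
    public void invoke(Tuple2<String, String> record) throws Exception {
        Connection conn = connections.get(record.f0); // route on the field value
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO events (payload) VALUES (?)")) {
            ps.setString(1, record.f1);
            ps.executeUpdate();
        }
    }

    @Override
    public void close() throws Exception {
        if (connections != null) {
            for (Connection c : connections.values()) {
                c.close();
            }
        }
    }
}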
Thanks Sunil, this is also a question I wanted to ask the forum: how to
separate the application log (aggregated to a file), similar to YARN log
aggregation, but capture Flink process-related logs from all workers and get them
in one central location.
I would love to hear suggestions and
Hi flink users and experts,
In my Flink processor I am trying to use the Flink Kinesis connector. I read from
a Kinesis stream, and after the transformation (for which I use a
RichCoFlatMapFunction), the JSON event needs to sink to a Kinesis stream k1.
DataStream myStream = see.addSource(new
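A minimal sketch of the sink side to stream k1 (assuming a producerConfig Properties object with region and credentials, analogous to the consumerConfig above; the transformed stream variable name is a placeholder):

import java.util.Properties;

import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisProducer;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

Properties producerConfig = new Properties();
producerConfig.setProperty("aws.region", "us-west-2"); // region is an assumption

FlinkKinesisProducer<String> kinesisSink =
        new FlinkKinesisProducer<>(new SimpleStringSchema(), producerConfig);
kinesisSink.setDefaultStream("k1");   // the target stream from the question
kinesisSink.setDefaultPartition("0"); // simple fixed partition key
transformedStream.addSink(kinesisSink);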
job.
If that is not feasible for you, then you can also write your own custom source
function which queries the REST endpoint and whenever it receives new data it
will send the data to its downstream operators.
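A minimal sketch of such a source (the REST call itself is left as a placeholder):

import org.apache.flink.streaming.api.functions.source.SourceFunction;

public class RestPollingSource implements SourceFunction<String> {
    private volatile boolean running = true;

    @Override
    public void run(SourceContext<String> ctx) throws Exception {
        while (running) {
            String payload = fetchFromRestEndpoint(); // hypothetical HTTP GET helper
            if (payload != null) {
                synchronized (ctx.getCheckpointLock()) { // emit under the checkpoint lock
                    ctx.collect(payload);
                }
            }
            Thread.sleep(5000); // poll every 5 seconds
        }
    }

    @Override
    public void cancel() {
        running = false;
    }

    private String fetchFromRestEndpoint() {
        // placeholder: issue the HTTP request against the REST endpoint here
        return null;
    }
}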
Cheers,
Till
On Thu, Feb 2, 2017 at 10:27 PM, Sathi Chowdhury
<sathi.c
It’s good to be here. I have a data stream coming from Kinesis. I also have a
list of hashmaps which holds metadata that needs to participate in the
processing.
In my Flink processor class I construct this metadata (hardcoded):
public static void main(String[] args) throws Exception {
…….//
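A minimal sketch of one way to get such hardcoded metadata to the processing operators is to turn the list into a broadcast stream and connect it with the main stream (the socket source stands in for the Kinesis source, and the enrichment logic is only illustrative):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.CoFlatMapFunction;
import org.apache.flink.util.Collector;

public class MetadataBroadcastJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // hardcoded metadata, as described above
        List<Map<String, String>> metadata = new ArrayList<>();
        Map<String, String> entry = new HashMap<>();
        entry.put("someKey", "someValue");
        metadata.add(entry);

        DataStream<String> events = env.socketTextStream("localhost", 9999); // stand-in source
        DataStream<Map<String, String>> metadataStream = env.fromCollection(metadata).broadcast();

        events.connect(metadataStream)
              .flatMap(new CoFlatMapFunction<String, Map<String, String>, String>() {
                  private final List<Map<String, String>> cached = new ArrayList<>();

                  @Override
                  public void flatMap1(String event, Collector<String> out) {
                      out.collect(event + " (seen " + cached.size() + " metadata entries)");
                  }

                  @Override
                  public void flatMap2(Map<String, String> meta, Collector<String> out) {
                      cached.add(meta); // keep the broadcast metadata locally
                  }
              })
              .print();

        env.execute("metadata-broadcast-sketch");
    }
}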