We've seen a similar issue in our production; you can refer to this JIRA
(https://issues.apache.org/jira/browse/FLINK-10848) for more detail.
Shuyi
On Sun, Dec 9, 2018 at 11:27 PM sohimankotia wrote:
> Hi ,
>
> While running a Flink streaming job, it is requesting more than the specified
> resources
Hi All,
Is there a way to send hints to the job graph builder, such as specifically
disabling or enabling chaining?
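(For reference, the DataStream API already exposes per-operator chaining hints. A minimal Scala sketch, assuming a standard StreamExecutionEnvironment; the pipeline below is illustrative, not the poster's actual job:)

import org.apache.flink.streaming.api.scala._

object ChainingHints extends App {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  // Global switch: build the whole job graph without operator chaining.
  // env.disableOperatorChaining()

  // Per-operator hints:
  env.fromElements("a", "b", "c")
    .map(_.toUpperCase)
    .startNewChain()   // begin a new chain at this operator
    .filter(_.nonEmpty)
    .disableChaining() // never chain this operator with its neighbours
    .print()

  env.execute("chaining hints sketch")
}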
Hi ,
While running a Flink streaming job, it requests more resources from YARN than
specified. I am giving it 17 TMs, but it requests more than 35 containers from
YARN.
This is happening for all versions greater than 1.4.0.
Attaching JM logs.
logs.zip
Hi,
Flink is requesting more containers from YARN than specified. I am using 17 TMs
with 3 slots each, but at startup it acquires more than 35 TMs and then releases
them after some time.
I have attached the JM debug logs. Not sure what the issue could be?
logs.zip
We are trying to set up a single-node Kubernetes cluster:
1 Job Manager and 1 Task Manager.
We were getting an error before, and we followed this thread:
http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-1-5-4-issues-w-TaskManager-connecting-to-ResourceManager-td23298.html
Hi Generic Flink Developer,
Normally when you get an internal error from AWS, you also get a 500 status
code - the 200 seems odd to me.
One thing I do know is that if you’re hitting S3 hard, you have to expect and
recover from errors.
E.g. distcp jobs in Hadoop-land will auto-retry a failed
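(For reference, a minimal sketch of a generic retry-with-exponential-backoff helper in Scala; RetryUtil and flakyWrite are hypothetical names, not from this thread:)

import scala.annotation.tailrec
import scala.util.{Failure, Success, Try}

object RetryUtil {
  // Retry `op` up to `attempts` times, doubling the delay after each failure.
  @tailrec
  def retry[T](attempts: Int, delayMs: Long)(op: => T): T =
    Try(op) match {
      case Success(result) => result
      case Failure(_) if attempts > 1 =>
        Thread.sleep(delayMs)
        retry(attempts - 1, delayMs * 2)(op)
      case Failure(e) => throw e
    }
}

object RetryExample extends App {
  // Hypothetical flaky call standing in for an S3 write.
  def flakyWrite(): Unit =
    if (scala.util.Random.nextDouble() < 0.5)
      throw new RuntimeException("transient S3 error")

  RetryUtil.retry(attempts = 5, delayMs = 500)(flakyWrite())
  println("write succeeded")
}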
Hi, does anyone have an idea what causes this and how it can be resolved? Thanks.
‐‐‐ Original Message ‐‐‐
On Wednesday, December 5, 2018 12:44 AM, Flink Developer
wrote:
> I have a Flink app with high parallelism (400) running in AWS EMR. It uses
> Flink v1.5.2. It sources Kafka and
Hello everyone!
In our planned setup we have 2 data centers, each in a different geographic zone
(and a third for ZK as a tie breaker). We use HA with ZooKeeper, as follows:
Normally, DC1 will run our job:
DC1: Machine 1 (ZK1), Machine 2 (ZK2)
DC2: Machine 3 (ZK3), Machine 4 (ZK4)
DC3: Machine 5 (ZK5, tie breaker only)
Hi,
1. I took a closer look at the relevant code in
RocksDBIncrementalRestoreOperation::restoreInstanceDirectoryFromPath and did
some verification. I found that this problem is likely related to file system
connection restrictions. At first I was worried that my HDFS would be
overloaded due to a
Hi,
I am trying to read from Kafka and write to Parquet, but I am getting
thousands of ".part-0-0in progress..." files (and counting ...).
Is that a bug, or am I doing something wrong?
object StreamParquet extends App {
implicit val env: StreamExecutionEnvironment =
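(For reference, a minimal sketch of one common cause of never-ending in-progress files: with bulk formats such as Parquet, part files are only finalized when a checkpoint completes. The sketch assumes Flink 1.6+ with the flink-parquet module; the Event type, path, and checkpoint interval are illustrative, not from the original job:)

import org.apache.flink.core.fs.Path
import org.apache.flink.formats.parquet.avro.ParquetAvroWriters
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink
import org.apache.flink.streaming.api.scala._

// Illustrative event type; the real Kafka schema is not shown in the thread.
case class Event(id: String, value: Long)

object ParquetSinkSketch extends App {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  // Without checkpointing, bulk-format part files stay "in progress" forever.
  env.enableCheckpointing(60000L)

  val events: DataStream[Event] =
    env.fromElements(Event("a", 1L), Event("b", 2L)) // stand-in for the Kafka source

  val sink: StreamingFileSink[Event] = StreamingFileSink
    .forBulkFormat(new Path("hdfs:///tmp/parquet-out"),
      ParquetAvroWriters.forReflectRecord(classOf[Event]))
    .build()

  events.addSink(sink)
  env.execute("kafka-to-parquet sketch")
}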
Hi All,
I see the rescale API allows us to somehow redistribute elements locally, but is
it possible to make the upstream operator distributed evenly across task
managers? For example, I have 10 task managers, each with 10 slots. The
application reads data from a Kafka topic with 20 partitions, then
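(If the goal is to spread records from the 20 source subtasks over all slots, rather than to control where the source subtasks themselves are scheduled, here is a minimal Scala sketch of rebalance vs. rescale; the source below is a stand-in for the Kafka consumer:)

import org.apache.flink.streaming.api.scala._

object RedistributeSketch extends App {
  val env = StreamExecutionEnvironment.getExecutionEnvironment

  // Stand-in for the 20-partition Kafka source.
  val source: DataStream[String] = env.fromElements("a", "b", "c")

  // rescale round-robins records only to a local subset of downstream subtasks,
  // whereas rebalance round-robins across all parallel downstream subtasks,
  // spreading the load over every task manager.
  source.rebalance
    .map(_.toUpperCase)
    .print()

  env.execute("redistribute sketch")
}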
Hi Jörn,
There are no more logs. Attaching the YARN aggregated logs for the first problem.
For the second one, the job is not even getting submitted.
- Sohi
On Sun, Dec 9, 2018 at 2:13 PM Jörn Franke wrote:
> Can you check the Flink log files? You should find a better description of the
> error there.
>
> >
Can you check the Flink log files? You should find a better description of the
error there.
> On 08.12.2018 at 18:15, sohimankotia wrote:
>
> Hi ,
>
> I have installed flink-1.7.0 with Hadoop 2.7 and Scala 2.11. We are using the
> Hortonworks Hadoop distribution (hdp/2.6.1.0-129/).
>
> *Flink lib folder