Hi Igal, thanks for your help.
If I understood correctly, the Flink deployments (not the functions) need
to use the same image, right? Which means that the Flink master and all
workers still need to use the same image, which includes the module.yaml and
the jar with embedded modules of the full pro
Hi David,
First, I want to clarify two things based on the info you provided below.
1. If all the tasks are running on the same TaskManager, there is no
credit-based flow control. The downstream operator consumes the upstream's data
directly in memory; no network shuffle is needed.
2. If the Tas
Sorry, I missed the document link [1]:
https://ci.apache.org/projects/flink/flink-docs-release-1.10/monitoring/back_pressure.html
--
From: Zhijiang
Send Time: Thursday, June 11, 2020, 11:32
To: Steven Nelson; user
Subject: Re: Flink 1.10 mem
Regarding monitoring backpressure, you can refer to the document [1].
As for debugging the backpressure, one option is to trace the jstack of the
respective window task thread that causes the backpressure (the one with
almost the maximum in-queue buffers).
After tracing the jstack frequently, you might find
Hi Averell,
Thanks for trying the native K8s integration. All your issues are due to
high availability not being configured. If you start an HA Flink cluster,
like the following, then when a JobManager/TaskManager terminates
exceptionally, all the jobs can recover and restore from the latest
checkpoint.
Even
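The HA setup referred to above can be sketched as a flink-conf.yaml fragment; all the values below (cluster id, quorum address, storage path) are placeholders for illustration, not values from this thread:

```yaml
# Sketch only: ZooKeeper-based HA options for a Flink cluster on Kubernetes.
kubernetes.cluster-id: my-flink-cluster
high-availability: zookeeper
high-availability.zookeeper.quorum: zk-node:2181
high-availability.storageDir: s3://my-bucket/flink/ha
```

With HA metadata in the storage dir, a restarted JobManager can rediscover running jobs and restore them from the latest checkpoint.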
Hi, John,
AFAIK, Flink will automatically ship the "plugins/"
directory of your Flink distribution to YARN [1]. So, you just need to
make a directory in "plugins/" and put your custom jar into it. Did you
run into any problems with this approach?
[1]
https://github.com/apache/flink/blob/216
Hi Lorenzo,
Looking at the stack trace, the issue is that copying a record uses the
serializer directly. So, you need to call enableObjectReuse() [1] to avoid that.
Make sure that you are not modifying/caching data after emitting it in your
pipeline (except for Flink-managed state).
Then, it should be pos
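The caveat about not modifying data after emitting it can be illustrated without Flink at all. This is a minimal plain-Java sketch (the Record class and the list standing in for the next operator are made up for illustration), showing why a record must not be mutated after being emitted by reference:

```java
// Illustration (plain Java, no Flink dependency) of why mutating a record after
// emitting it is unsafe once object reuse is enabled: downstream operators may
// hold a reference to the very same instance instead of a serialized copy.
import java.util.ArrayList;
import java.util.List;

public class ObjectReuseDemo {
    static class Record {
        int value;
        Record(int v) { value = v; }
    }

    // Returns the value seen "downstream" after the producer mutates the record.
    static int emittedValue() {
        List<Record> downstream = new ArrayList<>(); // stands in for the next operator
        Record r = new Record(1);
        downstream.add(r);  // "emit" without copying, as with object reuse enabled
        r.value = 99;       // mutation after emitting...
        return downstream.get(0).value; // ...is visible downstream
    }

    public static void main(String[] args) {
        System.out.println(emittedValue()); // prints 99, not 1
    }
}
```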
Hi Arti,
Microbenchmarks for AsyncIO are available [1] and the results are shown in [2].
So you can roughly expect 1.6k records/ms per core to be the upper limit
without any actual I/O. That range should hold for Flink 1.10 and the coming
Flink 1.11. I cannot say much about older versions, and you didn't s
Hi Arvid,
I confirm that in case 3) the problem is AvroSerializer.
How can I use a different serializer with AvroFileFormat?
I would be happy to get the file ingestion working and, immediately after,
map to a hand-written POJO, to avoid any inefficiency or headaches with
moving around GenericR
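The "map to a hand-written POJO immediately after ingestion" idea can be sketched in plain Java, modeling the generic record as a Map (the Event class and the field names "id" and "ts" are invented for illustration; in a Flink job this logic would live in a MapFunction applied right after the source):

```java
// Sketch (plain Java, no Avro dependency): convert a generic record, modeled here
// as a Map, into a hand-written POJO so the rest of the pipeline never touches
// the generic representation.
import java.util.Map;

public class ToPojoSketch {
    static class Event {
        String id;
        long ts;
    }

    static Event toPojo(Map<String, Object> record) {
        Event e = new Event();
        e.id = (String) record.get("id"); // field names are assumptions
        e.ts = (Long) record.get("ts");
        return e;
    }

    public static void main(String[] args) {
        Event e = toPojo(Map.of("id", "a", "ts", 1L));
        System.out.println(e.id + " " + e.ts); // prints: a 1
    }
}
```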
We are working with a process and having some problems with backpressure.
The backpressure seems to be caused by a simple Window operation, which
causes our checkpoints to fail.
What would be the recommendations for debugging the backpressure?
Hi Yang,
I'm using DEBUG level; do you know what to search for to see the
kubernetes-client K8s apiserver address? I don't see anything useful so far.
Best
kevin
On 2020/06/08 16:02:07, "Bohinski, Kevin" <k...@comcast.com> wrote:
> Hi Yang,
>
> Thanks again for your help so far.
>
> I t
As the Flink Async IO operator is designed for external API or DB calls, are
there any specific guidelines/tips for scaling up this operator?
Particularly for use cases where incoming events are being ingested at
very high speed and the Async IO operator with orderedWait mode cannot
keep up with t
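A rough plain-Java sketch of the mechanism behind the operator's capacity setting (no Flink dependency; the thread pool stands in for the external service, and the class and method names are invented):

```java
// Sketch of the idea behind AsyncDataStream.orderedWait's capacity parameter:
// bound the number of in-flight async requests so a slow external service
// backpressures the producer instead of letting pending requests grow unbounded.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

public class BoundedAsyncSketch {
    // Processes `events` records with at most `capacity` requests in flight.
    static int processed(int events, int capacity) {
        Semaphore inFlight = new Semaphore(capacity);           // capacity analogue
        ExecutorService pool = Executors.newFixedThreadPool(4); // stands in for the async client
        CountDownLatch done = new CountDownLatch(events);
        try {
            for (int i = 0; i < events; i++) {
                inFlight.acquire(); // blocks the producer once capacity is reached
                pool.submit(() -> {
                    // an external API/DB call would happen here
                    inFlight.release();
                    done.countDown();
                });
            }
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return events;
    }

    public static void main(String[] args) {
        System.out.println(processed(100, 10)); // prints 100
    }
}
```

Raising capacity (and parallelism) increases throughput only until the external system saturates; beyond that, the bound is what keeps memory usage stable.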
Hi! I'm assuming this question comes up regularly with AWS EMR. I posted
it on Stack Overflow.
https://stackoverflow.com/questions/62309400/how-to-safely-update-jobs-in-flight-using-apache-flink-on-aws-emr
I was not able to find instructions for
Hey Everyone,
I am having an issue with how to test a Flink job as a whole (rather than
its individual functions) when there is more than one sink. It seems
CollectSink() only works when there is a single .addSink call.
Has anyone implemented this successfully? Or is it not possible?
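One commonly suggested pattern, sketched here without any Flink dependency, is to give each sink its own collecting class with its own static buffer; in a real test SinkA and SinkB would implement Flink's SinkFunction<T>, and all names below are invented:

```java
// Sketch of the "one collecting sink class per sink" testing pattern: each sink
// class owns a static buffer, so each output stream can be asserted independently.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class MultiSinkTestSketch {
    static class SinkA {
        static final Queue<String> values = new ConcurrentLinkedQueue<>();
        void invoke(String v) { values.add(v); }
    }

    static class SinkB {
        static final Queue<Integer> values = new ConcurrentLinkedQueue<>();
        void invoke(Integer v) { values.add(v); }
    }

    // Simulates a job writing to two different sinks, then reads both buffers back.
    static String run() {
        SinkA.values.clear();
        SinkB.values.clear();
        new SinkA().invoke("hello"); // first sink receives a String
        new SinkB().invoke(42);      // second sink receives an Integer
        return SinkA.values.peek() + ":" + SinkB.values.peek();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints hello:42
    }
}
```

Static buffers are needed because Flink serializes and distributes sink instances; the test asserts on the static state after env.execute() returns.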
Hi Lorenzo,
1) I'm surprised that this doesn't work. I'd like to see that stacktrace.
2) This cannot work, because we bundle Avro 1.8.2. You could retest
with dateTimeLogicalType='Joda' set, but then you will probably see the
same issue as in 1)
3) I'm surprised that this doesn't work either.
Hello,
I have a custom filesystem that I am trying to migrate to the plugins model
described here:
https://ci.apache.org/projects/flink/flink-docs-stable/ops/filesystems/#adding-a-new-pluggable-file-system-implementation,
but it is unclear to me how to dynamically get the plugins directory to be
a
Thanks Timo,
the stack trace with the 1.9.2-generated specific file is the following:
org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException:
Could not forward element to next operator
at
org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(O
Hi Lorenzo,
as far as I know, we don't support Avro's logical types in Flink's
AvroInputFormat yet. E.g., AvroRowDeserializationSchema [1] supports the
1.8.2 version of logical types but might be incompatible with 1.9.2.
Regarding 2), the specific record generated with the Avro 1.9.2 plugin:
Could you send u
Hi Annemarie,
if TTL is what you are looking for and queryable state is what limits
you, it might make sense to come up with a custom implementation of
queryable state. TTL might be more difficult to implement. As far as I
know, this feature is more of an experimental feature without any
consi
Hi,
I need to continuously ingest Avro files as they arrive.
The files are written by a Kafka Connect S3 sink, but S3 is not the point here.
I started by trying to ingest a static batch of files from the local fs first,
and I am having weird issues with Avro deserialization.
I have to say, the records contai
We're currently using this template:
https://github.com/docker-flink/examples/tree/master/helm/flink for running
Flink on Kubernetes as a job-specific cluster (with the detail of specifying
the class as the main runner for the cluster).
How would I go about setting up savepoints, so
You cannot add custom metric types, just implementations of the existing
ones. Your timer wrapper will have to implement Gauge or Histogram.
On 10/06/2020 14:17, Vinay Patil wrote:
Hi,
As timer metric is not provided out of the box, can I create a new
MetricGroup by implementing this interfa
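The timer-as-Gauge idea can be sketched in plain Java; the Gauge interface below is a simplified local stand-in for org.apache.flink.metrics.Gauge (in a real job you would implement that interface and register the gauge on the operator's MetricGroup):

```java
// Sketch: exposing a timer through a Gauge, since Flink only offers the
// existing metric types (Counter, Gauge, Histogram, Meter).
public class TimerGaugeSketch {
    interface Gauge<T> { T getValue(); } // stand-in for Flink's Gauge

    static class TimerGauge implements Gauge<Long> {
        private volatile long lastDurationNanos;

        // Call with the start timestamp once the timed work finishes.
        void update(long startNanos) {
            lastDurationNanos = System.nanoTime() - startNanos;
        }

        @Override
        public Long getValue() { return lastDurationNanos; }
    }

    public static void main(String[] args) {
        TimerGauge timer = new TimerGauge();
        long start = System.nanoTime();
        // ... timed work would go here ...
        timer.update(start);
        System.out.println(timer.getValue() >= 0); // prints true
    }
}
```

A Histogram wrapper would work the same way but record a distribution of durations instead of only the last one.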
Hi,
As a timer metric is not provided out of the box, can I create a new
MetricGroup by implementing this interface and add timer capability? This
would be similar to the Histogram wrapper Flink has provided. If yes, I can
create a wrapper like
`public class TimerWrapper implements Timer`; in this case will
Hi,
I'm running some jobs using native Kubernetes. Sometimes, due to some unrelated
issue with our K8s cluster (e.g. a K8s node crashing), my Flink pods are gone.
The JM pod, as it is deployed using a Deployment, will be re-created
automatically. However, all of my jobs are lost.
What I have to do now ar
Hi,
I think this issue can be closed, because a probable reason has been found. It
seems that
https://github.com/apache/flink/blob/release-1.6.1-rc1/flink-yarn/src/main/java/org/apache/flink/yarn/entrypoint/YarnJobClusterEntrypoint.java
Hi
I'm struggling with updating a Flink application. While developing, I was using
a dev application configuration (application.conf). After updating
application.conf and rerunning the job in production mode, I caught an
exception stating that I'm still using the dev application configuration. Loo
Hi Kostas,
I'll try it by copying this class into my project for now, and wait for the
release. I'm not expecting to finish my migration by then ;) Have a nice day,
and thanks for the update. I'll keep this thread open in case I encounter any
problems. Thanks
Hi Alan,
Unfortunately not, but the release is expected to come out in the next
couple of weeks, so it will be available then.
Until then, you can either copy the code of the AvroWriterFactory to
your project and use it from there, or download the project from
github, as you said.
Cheers,
Kostas
Hi Kostas,
is this release available in Maven Central, or should I download the project
from GitHub?
Thanks,
Alan
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/