the State
> >Processor API, but it's just that up to this point, we didn't have a plan
> >for that yet.
> >Can you open a JIRA for this? I think it'll be a reasonable extension to
> >the API.
> >
> >
> >>
> >> And when I change `xxx.keyBy(_._
the difference between the watermark and my element's timestamp is
> greater than X - drop the element.
>
> However, I do not have access to the current watermark inside any of
> Flink's operators/functions including FilterFunction.
>
> How can such functionality be achieved?
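In Flink, a ProcessFunction exposes the current watermark through `ctx.timerService().currentWatermark()`, which a plain FilterFunction does not. The drop rule itself is a one-line comparison; below is a minimal, Flink-free sketch of that rule (the class and method names are invented for this illustration):

```java
// Minimal, Flink-free sketch of the drop rule: keep an element only if it
// is no more than maxDriftMs behind the current watermark. In Flink, the
// watermark would come from ctx.timerService().currentWatermark() inside
// a ProcessFunction.
public class LateElementFilter {
    private final long maxDriftMs;

    public LateElementFilter(long maxDriftMs) {
        this.maxDriftMs = maxDriftMs;
    }

    /** Returns true if the element should be kept (i.e. is not too late). */
    public boolean keep(long currentWatermark, long elementTimestamp) {
        return currentWatermark - elementTimestamp <= maxDriftMs;
    }

    public static void main(String[] args) {
        LateElementFilter filter = new LateElementFilter(5_000L); // "X" = 5s
        System.out.println(filter.keep(10_000L, 9_000L)); // true: only 1s behind
        System.out.println(filter.keep(10_000L, 3_000L)); // false: 7s behind
    }
}
```

Inside a ProcessFunction, the same comparison would simply guard the call to `out.collect(value)`.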
Hi Adam,
sorry for the late reply. Introducing a global state is something that
should be avoided as it introduces bottlenecks and/or concurrency/order
issues. Broadcasting the state between different subtasks will also bring a
loss in performance since each state change has to be shared with
Hi Yuval,
thanks for bringing this issue up. You're right: There is no error handling
currently implemented for SerializationSchema. FLIP-124 [1] addressed this
for the DeserializationSchema, though. I created FLINK-19397 [2] to cover
this feature.
In the meantime, I cannot think of any other
Hi Ruben,
thanks for reaching out to us. Flink's native Kubernetes Application mode
[1] might be what you're looking for.
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/native_kubernetes.html#flink-kubernetes-application
On Wed, Oct 28, 2020 at
I missed adding the mailing list in my previous email.
-- Forwarded message -
From: Matthias Pohl
Date: Tue, Oct 27, 2020 at 12:39 PM
Subject: Re: Flink memory usage monitoring
To: Rajesh Payyappilly Jose
Hi Rajesh,
thanks for reaching out to us. We worked on providing metrics
Hi Flavio,
others might have better ideas to solve this but I'll give it a try: Have
you considered extending FileOutputFormat to achieve what you need? That
approach (which is discussed in [1]) sounds like something you could do.
Another pointer I want to give is the DefaultRollingPolicy [2]. It
can go about doing this?
>
>
--
Matthias Pohl | Engineer
Follow us @VervericaData Ververica <https://www.ververica.com/>
--
Join Flink Forward <https://flink-forward.org/> - The Apache Flink
Conference
Stream Processing | Event Driven | Real Time
--
Ververica GmbH | Inva
Hi Hanan,
thanks for reaching out to the Flink community. Have you considered
changing io.tmp.dirs [1][2]?
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.9/ops/config.html#io-tmp-dirs
[2]
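For reference, the option goes into flink-conf.yaml and takes one or more local directories (the paths below are placeholders, not values from the original thread):

```yaml
# flink-conf.yaml: directories for Flink's local working files such as
# spill files. Multiple paths are separated by the OS path separator
# (':' on Linux).
io.tmp.dirs: /mnt/disk1/flink-tmp:/mnt/disk2/flink-tmp
```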
Hi Avi,
thanks for reaching out to the Flink community. I haven't worked with the
KinesisConsumer. Unfortunately, I cannot judge whether there's something
missing in your setup. But first of all: Could you confirm that the key
itself is valid? Did you try to use it in other cases?
Best,
Matthias
0, 2020 at 6:53 PM Matthias Pohl
> wrote:
>
>> Hi Avi,
>> thanks for reaching out to the Flink community. I haven't worked with the
>> KinesisConsumer. Unfortunately, I cannot judge whether there's something
>> missing in your setup. But first of all: Could you confirm tha
Hi Robert,
there is a discussion about it in FLINK-20632 [1]. PR #9249 [2] still needs
to get reviewed. You might want to follow that PR as Xintong suggested in
[1].
I hope that helps.
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-20632
[2]
Hi Iacovos,
The task's off-heap configuration value is used when spinning up
TaskManager containers in a clustered environment. It will contribute to
the overall memory reserved for a TaskManager container during deployment.
This parameter can be used to influence the amount of memory allocated if
Hi Jiahui,
thanks for reaching out to the mailing list. This is not something I have
expertise in. But have you checked out the Flink SSL Setup documentation
[1]? Maybe, you'd find some help there.
Additionally, I did go through the code a bit: A SecurityContext is loaded
during ClusterEntrypoint
s used when Flink uses HybridMemorySegments. Well, how does Flink know
> when to use these HybridMemorySegments, and in which operations does this
> happen?
>
> Best,
> Iacovos
> On 11/11/20 11:41 AM, Matthias Pohl wrote:
>
> Hi Iacovos,
> The task's off-heap configuration value
Hi Averell,
thanks for sharing this with the Flink community. Is there anything
suspicious in the logs which you could share?
Best,
Matthias
On Fri, Nov 13, 2020 at 2:27 AM Averell wrote:
> I have some updates. Some weird behaviours were found. Please refer to the
> attached photo.
>
> All
Hi 键,
we would need more context on your case (e.g. logs and more details on what
you're doing exactly or any other useful information) to help.
Best,
Matthias
On Thu, Nov 12, 2020 at 3:25 PM 键 <1941890...@qq.com> wrote:
> Data loss exception using hash join in batch mode
>
Hi Flavio,
thanks for sharing this with the Flink community. Could you answer the
following questions, please:
- What's the code of your Job's main method?
- What cluster backend and application do you use to execute the job?
- Is there anything suspicious you can find in the logs that might be
Hi Rex,
after verifying with Timo I created a new issue to address your proposal of
introducing a new operator [1]. Feel free to work on that one if you like.
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-20148
On Thu, Nov 5, 2020 at 6:35 PM Rex Fenley wrote:
> Thanks Timo,
>
Hi Si-li,
trying to answer your initial question: Theoretically, you could try using
the co-location constraints to achieve this. But keep in mind that this
might lead to multiple Join operators running in the same JVM reducing the
amount of memory each operator can utilize.
Best,
Matthias
On
Hi Fuyao,
for your first question about the different behavior depending on whether
you chain the methods or not: Keep in mind that you have to save the return
value of the assignTimestampsAndWatermarks method call if you don't chain
the methods together as it is also shown in [1].
At least the
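The pitfall is the usual one with fluent APIs that return a new object instead of mutating the receiver: assignTimestampsAndWatermarks returns a new DataStream, so discarding the return value silently discards the watermark assignment. A Flink-free sketch of the same trap (all names here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Invented fluent "Pipeline" standing in for DataStream: withStage(), like
// assignTimestampsAndWatermarks(), returns a NEW object and leaves the
// receiver untouched.
final class Pipeline {
    final List<String> stages;

    Pipeline(List<String> stages) {
        this.stages = stages;
    }

    Pipeline withStage(String stage) {
        List<String> copy = new ArrayList<>(stages);
        copy.add(stage);
        return new Pipeline(copy); // new object; 'this' is unchanged
    }

    public static void main(String[] args) {
        Pipeline source = new Pipeline(new ArrayList<>());

        // Wrong: the return value is discarded, so 'source' gains no stage.
        source.withStage("watermarks");
        System.out.println(source.stages); // []

        // Right: keep the returned pipeline (or chain calls directly on it).
        Pipeline withWatermarks = source.withStage("watermarks");
        System.out.println(withWatermarks.stages); // [watermarks]
    }
}
```

Chaining the calls avoids the problem entirely because each call operates on the object returned by the previous one.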
.withMaxPartSize(1024 * 1024 * 1024)
> .build())
> .build();
>
> //input.print();
> input.addSink(sink);
>
>
> Not sure what else to try. Any pointers appreciated.
>
>
>
> Sent with ProtonMail <https://protonmail.com> Secure Email.
>
>
--
Matthias Poh
Hi Joseph,
thanks for reaching out to us. There shouldn't be any downsides other than
the one you already mentioned as far as I know.
Best,
Matthias
On Fri, Oct 23, 2020 at 1:27 PM Joseph Lorenzini
wrote:
> Hi all,
>
>
>
> I plan to run Flink jobs as Docker containers in an AWS Elastic
Hello Edward,
please find my answers within your message below:
On Wed, Nov 4, 2020 at 1:35 PM Colletta, Edward
wrote:
> Using Flink 1.9.2 with FsStateBackend, Session cluster.
>
>
>
>1. Does heap state get cleaned up when a job is cancelled?
>
> We have jobs that we run on a daily basis.
Hi Alexey,
thanks for reaching out to the Flink community. I'm not 100% sure whether
you have an actual issue or whether it's just the changed behavior you are
confused about. The change you're describing was introduced in Flink 1.12
as part of the work on FLIP-104 [1] exposing the actual memory
Hi Marco,
Could you share the preconfiguration logs? They are printed at the
beginning of the TaskManagers' logs and contain a summary of the memory
configuration in use.
Best,
Matthias
On Tue, Jan 26, 2021 at 6:35 AM Marco Villalobos
wrote:
>
> I have a flink job that collects and aggregates
Hi Marco,
have you had a look into the connector documentation ([1] for the regular
connector or [2] for the SQL connector)? Maybe, discussions about
connection pooling in [3] and [4] or the code snippets provided in the
JavaDoc of JdbcInputFormat [5] help as well.
Best,
Matthias
[1]
Hi Wayne,
based on other mailing list discussion ([1]) you can assume that the
combination of FileProcessingMode.PROCESS_CONTINUOUSLY and setting
FileInputFormat.setNestedFileEnumeration to true should work as you expect
it to work.
Can you provide more context on your issue like log files? Which
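In outline, that combination looks like the following (the path, element type, and scan interval are placeholder values, not taken from the original thread):

```java
// Sketch: monitor a directory continuously and descend into subdirectories.
TextInputFormat format = new TextInputFormat(new Path("s3://bucket/input"));
format.setNestedFileEnumeration(true); // recurse into nested directories

DataStream<String> lines = env.readFile(
        format,
        "s3://bucket/input",
        FileProcessingMode.PROCESS_CONTINUOUSLY,
        60_000L); // re-scan interval in milliseconds
```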
Hi Sagar,
have you had a look at CoProcessFunction [1]? CoProcessFunction enables you
to join two streams into one and also provide context to use SideOutput [2].
Best,
Matthias
[1]
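In outline, connecting two streams with a CoProcessFunction and routing some records to a side output could look like this (the types, key selectors, and tag name are placeholders invented for the sketch):

```java
final OutputTag<String> sideTag = new OutputTag<String>("side") {};

SingleOutputStreamOperator<String> joined = left
        .connect(right)
        .keyBy(l -> l.key, r -> r.key)
        .process(new CoProcessFunction<Left, Right, String>() {
            @Override
            public void processElement1(Left value, Context ctx, Collector<String> out) {
                out.collect("left: " + value); // main output
            }

            @Override
            public void processElement2(Right value, Context ctx, Collector<String> out) {
                ctx.output(sideTag, "right: " + value); // side output
            }
        });

DataStream<String> side = joined.getSideOutput(sideTag);
```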
Hi Sebastián,
have you tried changing the dependency scope to provided
for flink-table-planner-blink as it is suggested in [1]?
Best,
Matthias
[1]
http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Flink-1-10-exception-Unable-to-instantiate-java-compiler-td38221.html
On Fri, Jan 22,
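For reference, the suggested change is a scope entry in the job's pom.xml (the version and Scala suffix below are examples matching the Flink 1.12 era of this thread):

```xml
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-table-planner-blink_2.11</artifactId>
  <version>1.12.0</version>
  <!-- provided: available at compile time, not bundled into the job jar -->
  <scope>provided</scope>
</dependency>
```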
Hi Lu,
thanks for reaching out to the community, Lu. Interesting observation.
There's no change between 1.9.1 and 1.11 that could explain this behavior
as far as I can tell. Have you had a chance to debug the code? Can you
provide the code so that we could look into it more closely?
Another thing:
Hi Smile,
Have you used a clean checkout? I second Robert's statement considering
that the dependency you're talking about is already part
of flink-end-to-end-tests/flink-end-to-end-tests-common-kafka/pom.xml. It
also has the correct scope set both in master and release-1.12.
Best,
Matthias
On
neral.
>
> :-)
>
> Thanks!
>
> On Fri, 22 Jan 2021 at 15:19, Matthias Pohl
> wrote:
>
>> Hi Sebastián,
>> have you tried changing the dependency scope to provided
>> for flink-table-planner-blink as it is suggested in [1]?
>>
>> Best,
>>
errors
> Cannot resolve plugin org.codehaus.mojo:build-helper-maven-plugin:
>
>
> Best,
> penguin
>
>
>
>
> On 2021-01-13 15:24:22, "Matthias Pohl" wrote:
>
> Hi,
> you might want to move these kinds of questions into the
> user@flink.apache
s mean that only the
> replied person can see the email?
>
>
> If Maven fails to download plugins or dependencies, is mvn -clean
> install -DskipTests a must?
> I'll try first.
>
> penguin
>
>
>
>
> On 2021-01-13 16:35:10, "Matthias Pohl" wrote:
>
Hi Abhishek,
unsubscribing works by sending an email to user-unsubscr...@flink.apache.org
as stated in [1].
Best,
Matthias
[1] https://flink.apache.org/community.html#mailing-lists
On Sun, Jan 24, 2021 at 3:06 PM Abhishek Jain wrote:
> unsubscribe
>
Hi,
thanks for reaching out to the community. I'm not a Hive or Orc format
expert. But could it be that this is a configuration problem? The error is
caused by an ArrayIndexOutOfBounds exception in
ValidReadTxnList.readFromString on an array generated by splitting a String
using colons as
luent/kafka-avro-serializer/5.5.2/kafka-avro-serializer-5.5.2.pom
> [3]. https://mvnrepository.com/artifact/io.confluent/kafka-avro-serializer
>
> At 2021-01-22 21:22:51, "Matthias Pohl" wrote:
>
> Hi Smile,
> Have you used a clean checkout? I second Robert's statement consideri
tem property? If
> user code has nothing to do with such arguments, why Flink append these
> arguments to user JOB args?
> Thanks,
> Alexey
>
>
> --
> *From:* Matthias Pohl
> *Sent:* Sunday, January 17, 2021 11:53:29 PM
> *To:* Alex
Hi,
you might want to move these kinds of questions into the
user@flink.apache.org which is the mailing list for community support
questions [1].
Coming back to your question: Is it just me or is the image not accessible?
Could you provide a textual description of your problem?
Best,
Matthias
Yes, thanks for taking over the release!
Best,
Matthias
On Mon, Feb 1, 2021 at 5:04 AM Zhu Zhu wrote:
> Thanks Xintong for being the release manager and everyone who helped with
> the release!
>
> Cheers,
> Zhu
>
> Dian Fu wrote on Fri, Jan 29, 2021 at 5:56 PM:
>
>> Thanks Xintong for driving this release!
Hi Maciek,
my understanding is that the jars in the JobManager should be cleaned up
after the job is terminated (I assume that your jobs successfully
finished). The jars are managed by the BlobService. The dispatcher will
trigger the jobCleanup in [1] after job termination. Are there any
rg.apache.flink.configuration.GlobalConfiguration [] - Loading
> configuration property: jobmanager.memory.process.size, 1600m
> 2021-01-25 21:41:18,046 INFO
> org.apache.flink.configuration.GlobalConfiguration [] - Loading
> configuration property: taskmanager.memory.process.size, 1728m
> 2021-01-25 21:41:18,0
AM Milind Vaidya wrote:
> Sounds good.
> How do I achieve stop with savepoint ?
>
> - Milind
>
> On Mon, Apr 26, 2021 at 12:55 AM Matthias Pohl
> wrote:
>
>> I'm not sure what you're trying to achieve. Are you trying to simulate a
>> task failure? Or are
Sorry for not getting back earlier. I missed that thread. It looks like
some wrong assumption on our end. Hence, Yangze and Guowei are right. I'm
gonna look into the issue.
Matthias
On Fri, May 14, 2021 at 4:21 AM Guowei Ma wrote:
> Hi, Gary
>
> I think it might be a bug. So would you like to
]
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-22688
On Wed, May 19, 2021 at 5:45 AM Gary Wu wrote:
> Thanks! I have updated the detail and task manager log in
> https://issues.apache.org/jira/browse/FLINK-22688.
>
> Regards,
> -Gary
>
> On Tue, 18 May 2021 at
Hi Dawid,
+1 (non-binding)
Thanks for driving this release. I checked the following things:
- downloaded and build source code
- verified checksums
- double-checked diff of pom files between 1.13.0 and 1.13.1-rc1
- did a visual check of the release blog post
- started cluster and ran jobs
Hi Vijay,
have you tried yarn-ship-files [1] or yarn-ship-archives [2]? Maybe, that's
what you're looking for...
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/deployment/config/#yarn-ship-files
[2]
Hi 王炳焱,
thanks for reaching out to the Flink community and sorry for the late
reply. Unfortunately, Flink SQL does not support state backwards
compatibility. There is no clear pointer in the documentation that states
that. I created FLINK-22799 [1] to cover this. In the meantime, you could
try
Hi Alexey,
the two implementations are not compatible. You can find information on how
to work around this in the Kafka Connector docs [1].
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/connectors/datastream/kafka/#upgrading-to-the-latest-connector-version
Hi Sonam,
It looks like it has been stale for some time. You might be able to restart
the discussion replying to the respective thread in the dev mailing list
[1]. You seem to be right about the repository based on Jark's reply in the
related ticket FLINK-15472 [2]. I'm adding Jark to the thread.
Hi Robert,
it would be interesting to see the corresponding taskmanager/jobmanager
logs. That would help in finding out why the savepoint creation failed.
Just to verify: The savepoint data wasn't written to S3 even after the
timeout happened, was it?
Best,
Matthias
On Thu, May 27, 2021 at 7:50
Hi Vijayendra,
Thanks for reaching out to the Flink community. There is no other way I
know of to achieve what you're looking for. The plugins support is provided
through the respective ./plugins/ directory as described in the docs [1].
Best,
Matthias
[1]
Hi Robert,
increasing heap memory usage could be due to some memory leak in the user
code. Have you analyzed a heap dump? About the TM logs you shared. I don't
see anything suspicious there. Nothing about memory problems. Are those the
correct logs?
Best,
Matthias
On Thu, May 27, 2021 at 6:01 PM
Hi Ashutosh,
you can set the metrics update interval
through metrics.fetcher.update-interval [1]. Unfortunately, there is no
single endpoint to collect all the metrics in a more efficient way other
than the metrics endpoints provided in [2].
I hope that helps.
Best,
Matthias
[1]
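For reference, the interval is set in flink-conf.yaml in milliseconds (the value below is an example):

```yaml
# flink-conf.yaml: how often the web UI's metric fetcher polls for
# updated metrics (milliseconds; 10000 is the default).
metrics.fetcher.update-interval: 5000
```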
On Fri, May 28, 2021 at 8:21 AM Matthias Pohl
> wrote:
>
>> Hi Robert,
>> it would be interesting to see the corresponding taskmanager/jobmanager
>> logs. That would help in finding out why the savepoint creation failed.
>> Just to verify: The savepoint data wasn't wri
Hi Enrique,
thanks for reaching out to the community. I'm not 100% sure what problem
you're facing. The log messages you're sharing could mean that the Flink
cluster still behaves as normal having some outages and the HA
functionality kicking in.
The behavior you're seeing with leaders for the
Hi igyu,
I'm not sure whether I can be of much help here because I'm not that
familiar with Kerberos. But the Flink documentation [1] suggests deploying
separate Flink clusters for each keytab. Did you try that?
Best,
Matthias
[1]
Hi Tao,
it looks like it should work considering that you have a sleep of 1 second
before each emission. I'm going to add Roman to this thread. Maybe he
sees something obvious that I'm missing.
Could you run the job with the log level set to debug and provide the logs
once more?
ent?
>
> Thomas
>
> On Tue, Jun 29, 2021 at 1:41 AM Matthias Pohl
> wrote:
>
>> Hi Rommel, Hi Thomas,
>> Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
>> FLINK-19137 [1]. The error you're seeing looks like some dependency issue
>>
Hi Kevin,
I haven't worked with Bugsnag. So, I cannot give more input on that one.
For Flink, exceptions are handled by the job's scheduler. Flink collects
these exceptions in some bounded queue called the exception history [1]. It
collects task failures but also global failures which make the job
Hi Rommel, Hi Thomas,
Apache Parquet was bumped from 1.10.0 to 1.11.1 for Flink 1.12 in
FLINK-19137 [1]. The error you're seeing looks like some dependency issue
where you have a version other than 1.11.1
of org.apache.parquet:parquet-column:jar on your classpath?
Matthias
[1]
Hi Sonam,
what's the reason for not using the Flink SQL client? Because of the
version issue? I only know that FlinkSQL's state is not
backwards-compatible between major Flink versions [1]. But that seems to be
unrelated to what you describe.
I'm gonna add Jark and Timo to this thread. Maybe,
gt;
>
>
> On Fri, Apr 23, 2021 at 12:32 AM Matthias Pohl
> wrote:
>
>> One additional question: How did you stop and restart the job? The
>> behavior you're expecting should work with stop-with-savepoint. Cancelling
>> the job and then just restarting it
Hi,
I have a few questions about your case:
* What is the option you're referring to for the bounded shuffle? That
might help to understand what streaming mode solution you're looking for.
* What does the job graph look like? Are you assuming that it's due to a
shuffling operation? Could you
Hi Vishal,
based on the error message and the behavior you described, introducing a
filter for late events is the way to go - just as described in the SO
thread you mentioned. Usually, you would collect late events in some kind
of side output [1].
I hope that helps.
Matthias
[1]
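In outline, the side-output pattern from [1] for windowed aggregations looks like this (the types, window size, and aggregate function are placeholders invented for the sketch):

```java
final OutputTag<Event> lateTag = new OutputTag<Event>("late-events") {};

SingleOutputStreamOperator<Result> result = events
        .keyBy(e -> e.key)
        .window(TumblingEventTimeWindows.of(Time.minutes(1)))
        .sideOutputLateData(lateTag) // late records go here instead of being dropped
        .aggregate(new MyAggregateFunction());

DataStream<Event> lateEvents = result.getSideOutput(lateTag);
```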
Hi Milind,
I bet someone else might have a faster answer. But could you provide the
logs and config to get a better understanding of what your issue is?
In general, the state is maintained even in cases where a TaskManager fails.
Best,
Matthias
On Thu, Apr 22, 2021 at 5:11 AM Milind Vaidya
I think you're right, Flavio. I created FLINK-22414 to cover this. Thanks
for bringing it up.
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-22414
On Fri, Apr 16, 2021 at 9:32 AM Flavio Pompermaier
wrote:
> Hi Yang,
> isn't this something to fix? If I look at the documentation at
Hi Gil,
I'm not sure whether I understand you correctly. What do you mean by
deploying the job manager as "job" or "deployment"? Are you referring to
the different deployment modes, Flink offers [1]? These would be
independent of Kubernetes. Or do you wonder what the differences are
between the
filter. Can you
>> please validate. That aside I already have the lateness configured for the
>> session window ( the normal withLateNess() ) and this looks like a session
>> window was not collected and still is alive for some reason ( a flink bug ?
>> )
>>
>>
Hi Aeden,
there are some improvements to time conversions coming up in Flink 1.13.
For now, the best solution to overcome this is to provide a user-defined
function.
Hope, that helps.
Best,
Matthias
On Wed, Apr 21, 2021 at 9:36 PM Aeden Jameson
wrote:
> I've probably overlooked something
Hi Ayush,
Which state backend have you configured [1]? Have you considered trying out
RocksDB [2]? RocksDB might help with persisting at least keyed state.
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/state_backends.html#choose-the-right-state-backend
[2]
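Switching to RocksDB is a configuration change (the checkpoint path below is an example):

```yaml
# flink-conf.yaml: keep keyed state in RocksDB (on local disk, spilling
# beyond memory) and checkpoint it to a durable location.
state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints
```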
Hi Abdullah,
the metadata file contains handles to the operator states of the checkpoint
[1]. You might want to have a look into the State Processor API [2].
Best,
Matthias
[1]
Thanks for setting this up, Konstantin. +1
On Thu, Apr 22, 2021 at 11:16 AM Konstantin Knauf wrote:
> Hi everyone,
>
> all of the Jira Bot rules are live now. Particularly in the beginning the
> Jira Bot will be very active, because the rules apply to a lot of old,
> stale tickets. So, if you
Hi Bhagi,
Thanks for reaching out to the Flink community. The error the UI is showing
is normal during an ongoing leader election. Additionally, the connection
refused warnings seem to be normal according to other mailing list
threads. Are you referring to the UI error as the issue you are facing?
Yes, thanks for managing the release, Dawid & Guowei! +1
On Tue, May 4, 2021 at 4:20 PM Etienne Chauchot
wrote:
> Congrats to everyone involved !
>
> Best
>
> Etienne
> On 03/05/2021 15:38, Dawid Wysakowicz wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
>
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>
> at java.lang.reflect.Method.invoke(Method.java:498)
>
> at
> org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:288)
>
> On Tue, May 4, 2021 at 11:47 AM Rag
Hi Vishal,
Do I understand you correctly that you're wondering whether you should
stick to ZooKeeper HA on Kubernetes vs Kubernetes HA? You could argue that
ZooKeeper might be better since it's already supported for longer and,
therefore, better tested. The Kubernetes HA implementation left the
Hi Fuyao,
sorry for not replying earlier. The stop-with-savepoint operation shouldn't
just suspend the job but terminate it. Is it that you might have a larger
state that makes creating the savepoint take longer? Even though,
considering that you don't experience this behavior with your 2nd solution,
Hi Abdullah,
is there a reason you're not considering triggering the stop-with-savepoint
operation through the REST API [1]? I'm not entirely sure whether I
understand you correctly: ./bin/flink is an executable. Why would you
assume it to be shown as a directory? You would need to provide
Hi Iacovos,
unfortunately, it doesn't as the related FLINK-19343 [1] is not resolved,
yet. The release notes for Flink 1.13 can be found in [2].
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-19343
[2] https://flink.apache.org/news/2021/05/03/release-1.13.0.html
On Mon, May 3,
.nabble.com/RESULT-VOTE-FLIP-36-Support-Interactive-Programming-in-Flink-Table-API-td45024.html
On Tue, May 4, 2021 at 8:12 AM Jack Kolokasis
wrote:
> Hi Matthias,
>
> Thank you for your reply. Are you going to include it in future versions?
>
> Best,
> Iacovos
> On 4/5/21 9:1
Hi Shipeng,
it looks like there is an open Jira issue FLINK-18202 [1] addressing this
topic. You might want to follow up on that one. I'm adding Timo and Jark to
this thread. They might have more insights.
Best,
Matthias
[1] https://issues.apache.org/jira/browse/FLINK-18202
On Sat, May 1, 2021
Hi Rainie,
the savepoint creation failed due to some tasks already being finished. It
looks like you ran into an issue that was (partially as FLINK-21066 [1] is
only a subtask of a bigger issue?) addressed in Flink 1.13 (see
FLINK-21066). I'm pulling Yun Gao into this thread. Let's see whether Yun
Hi Salva,
unfortunately, I have no experience with Bazel. Just by looking at the code
you shared I cannot come up with an answer either. Have you checked out the
ML thread in [1]? It provides two other examples where users used Bazel in
the context of Flink. This might give you some hints on where
Hi Ragini,
this is a dependency version issue. Flink 1.8.x does not support Hadoop 3,
yet. The support for Apache Hadoop 3.x was added in Flink 1.11 [1] through
FLINK-11086 [2]. You would need to upgrade to a more recent Flink version.
Best,
Matthias
[1]
Hi,
have you tried using the bundled Hadoop uber jar [1]? It looks like some Hadoop
dependencies are missing.
Best,
Matthias
[1] https://flink.apache.org/downloads.html#additional-components
On Wed, Feb 10, 2021 at 1:24 PM meneldor wrote:
> Hello,
> I am using PyFlink and I want to write records
Hi Daniel,
what's the exact configuration you used? Did you use the resource
definitions provided in the Standalone Flink on Kubernetes docs [1]? Did
you do certain things differently in comparison to the documentation?
Best,
Matthias
[1]
Hi narashima,
not sure whether this fits your use case, but have you considered creating
a savepoint and analyzing it using the State Processor API [1]?
Best,
Matthias
[1]
https://ci.apache.org/projects/flink/flink-docs-stable/dev/libs/state_processor_api.html#state-processor-api
On Wed, Feb
Hi Barisa,
thanks for sharing this. I'm gonna add Till to this thread. He might have
some insights.
Best,
Matthias
On Wed, Feb 10, 2021 at 4:19 PM Barisa Obradovic wrote:
> I'm trying to understand if behaviour of the flink jobmanager during
> zookeeper upgrade is expected or not.
>
> I'm
://ci.apache.org/projects/flink/flink-docs-release-1.12/deployment/ha/kubernetes_ha.html#configuration
On Wed, Feb 10, 2021 at 6:08 PM Matthias Pohl
wrote:
> Hi Daniel,
> what's the exact configuration you used? Did you use the resource
> definitions provided in the Standalone Flink on K
Hi Marco,
sorry for the late reply. The documentation you found [1] is already a good
start. You can define how many subtasks of an operator run in parallel
using the operator's parallelism configuration [2]. Each operator's subtask
will run in a separate task slot. There's the concept of slot
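The two knobs involved are plain configuration (the values below are examples): each TaskManager offers a fixed number of task slots, and a job with parallelism p occupies p slots overall.

```yaml
# flink-conf.yaml: 4 slots per TaskManager; a default parallelism of 8
# therefore needs at least 2 TaskManagers (8 / 4).
taskmanager.numberOfTaskSlots: 4
parallelism.default: 8
```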
Hi Vishal,
what about the TM metrics' REST endpoint [1]. Is this something you could
use to get all the metrics for a specific TaskManager? Or are you looking
for something else?
Best,
Matthias
[1]
1. yes - the same key would affect the same state variable
2. you need a join to have the same operator process both streams
Matthias
On Wed, Mar 24, 2021 at 7:29 AM vishalovercome wrote:
> Let me make the example more concrete. Say O1 gets as input a data stream
> T1
> which it splits into
o quickly. I am looking for a sample
> example where I can increment counters on each stage #1 thru #3 for
> DATASTREAM.
> Then probably I can print it using slf4j.
>
> Thanks,
> Vijay
>
> On Tue, Mar 23, 2021 at 6:35 AM Matthias Pohl
> wrote:
>
>> Hi Vijaye
Hi Danesh,
thanks for reaching out to the Flink community. Checking the code, it looks
like the OutputStream is added to a CloseableRegistry before writing to it
[1].
My suspicion is - based on the exception cause - that the CloseableRegistry
got triggered while restoring the state. I tried to
name.
>
> I can use String, Int, Enum, Long type keys in the Key that I send in the
> Query getKvState … but the moment I introduce a TreeMap, even though it
> contains a simple one entry String, String, it doesn’t work …
>
> Thanks,
> Sandeep
>
> On 23-Mar-2021, at 7:00 P
Hi Vignesh,
are you trying to achieve an even distribution of tasks for this one
operator that has the parallelism set to 16? Or do you observe the
described behavior also on a job level?
I'm adding Chesnay to the thread as he might have more insights on this
topic.
Best,
Matthias
On Mon, Mar
Hi Vijayendra,
thanks for reaching out to the Flink community. What do you mean by
displaying it in your local IDE? Would it be ok to log the information out
onto stdout? You might want to have a look at the docs about setting up a
slf4j metrics report [1] if that's the case.
Best,
Matthias
[1]
.nabble.com/Evenly-Spreading-Out-Source-Tasks-tp42108p42235.html
On Tue, Mar 23, 2021 at 1:42 PM Matthias Pohl
wrote:
> Hi Vignesh,
> are you trying to achieve an even distribution of tasks for this one
> operator that has the parallelism set to 16? Or do you observe the
> described b