Hi,
According to
https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/dev/python/faq/
> When executing jobs in a mini cluster (e.g. when executing jobs in an IDE) ...
please remember to explicitly wait for the job execution to finish as these
APIs are asynchronous.
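The pattern the FAQ recommends can be sketched like this in PyFlink (a sketch based on the 1.13 FAQ; the job name and pipeline are placeholders):

```python
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()
# ... build the pipeline here ...

# Submit asynchronously, then block until the job actually finishes;
# without the explicit wait, a mini-cluster job may be torn down early.
job_client = env.execute_async("my-job")  # "my-job" is a placeholder name
job_client.get_job_execution_result().result()
```

For the Table API, `table_env.execute_sql(...)` returns a `TableResult` whose `wait()` method serves the same purpose.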
I hope my program will
Hello All,
I also checked the native-k8s’s automatically generated configmap. It only
contains the flink-conf.yaml, but no log4j.properties. I am not very familiar
with the implementation details behind native k8s.
That should be the root cause. Could you check the implementation and help me
t
Hello Austin, Yang,
For the logging issue, I think I have found something worth noticing.
They are all based on base image flink:1.12.1-scala_2.11-java11
Dockerfile: https://pastebin.ubuntu.com/p/JTsHygsTP6/
In the JM and TM provisioned by the k8s operator, there is only flink-conf.yaml
in th
In one of my jobs, windowing is the costliest operation while upstream and
downstream operators are not as resource intensive. There's another operator
in this job that communicates with internal services. This has high
parallelism as well but not as much as that of the windowing operation.
Running
Hello All,
For Logging issue:
Hi Austin,
This is my full code implementation link [1]; I just removed the credential-related
things. The operator definition can be found here [2]. You can also
check other parts if you find any problem.
The operator uses log4j2 and you can see there is a log
Hey all,
Thanks for the ping, Matthias. I'm not super familiar with the details of @Yang
Wang 's operator, to be honest :(. Can you share
some of your FlinkApplication specs?
For the `kubectl logs` command, I believe that just reads stdout from the
container. Which logging framework are you using
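Since `kubectl logs` only reads what the process writes to stdout, the container's logging config needs a console appender. Flink's official docker images ship a log4j-console.properties along these lines (a sketch, not necessarily your image's exact file; check /opt/flink/conf inside the container):

```properties
# log4j2 properties format: route the root logger to stdout
rootLogger.level = INFO
rootLogger.appenderRef.console.ref = ConsoleAppender

appender.console.name = ConsoleAppender
appender.console.type = CONSOLE
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```

If the operator only mounts flink-conf.yaml and no log4j config, the JVM may fall back to a default that doesn't log to the console at all.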
Chiming in here since I work with Sihan.
Roman, there aren't many logs beyond this WARN; in fact it should be an ERROR,
since it fails our job and the job has to restart.
Here is a fresh new example of "Sending the partition request to 'null'
failed." exception. The only log we see before exception was:
tim
Yes, thanks for managing the release, Dawid & Guowei! +1
On Tue, May 4, 2021 at 4:20 PM Etienne Chauchot
wrote:
> Congrats to everyone involved !
>
> Best
>
> Etienne
> On 03/05/2021 15:38, Dawid Wysakowicz wrote:
>
> The Apache Flink community is very happy to announce the release of Apache
> F
In Flink 1.11, there were some changes to how the flink-clients dependency is
bundled [1]. The error you're seeing is likely due to the flink-clients
module not being on the classpath anymore. Can you check your dependencies
and update the pom.xml as suggested in [1]?
Matthias
[1] https://flink.a
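Per the 1.11 release notes, flink-clients is no longer pulled in transitively, so projects that submit jobs programmatically may need to declare it explicitly. A sketch of the pom.xml addition (version and Scala suffix below are placeholders for your setup):

```xml
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-clients_2.11</artifactId>
    <version>1.11.3</version>
</dependency>
```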
Could you describe a situation in which hand-tuning the parallelism of
individual operators produces significantly better throughput than the
default approach? I think it would help this discussion if we could have a
specific use case in mind where this is clearly better.
Regards,
David
On Tue, M
Yes, thanks a lot for driving this release Arvid :)
Piotrek
On Thu, Apr 29, 2021 at 19:04 Till Rohrmann wrote:
> Great to hear. Thanks a lot for being our release manager Arvid and to
> everyone who has contributed to this release!
>
> Cheers,
> Till
>
> On Thu, Apr 29, 2021 at 4:11 PM Arvid H
Forgot to add one more question - 7. If maxParallelism needs to be set to
control parallelism, then wouldn't that mean that we wouldn't ever be able
to take a savepoint and rescale beyond the configured maxParallelism? This
would mean that we can never achieve hand-tuned resource efficiency. I will
Congrats to everyone involved !
Best
Etienne
On 03/05/2021 15:38, Dawid Wysakowicz wrote:
The Apache Flink community is very happy to announce the release of
Apache Flink 1.13.0.
Apache Flink® is an open-source stream processing framework for
distributed, high-performing, always-available,
Some questions about adaptive scheduling documentation - "If new slots become
available the job will be scaled up again, up to the configured
parallelism".
Does parallelism refer to maxParallelism or parallelism? I'm guessing it's
the latter because the doc later mentions - "In Reactive Mode (see
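For reference, Reactive Mode in 1.13 is switched on through flink-conf.yaml; in that mode the scheduler uses all available slots and treats maxParallelism as the upper bound for scaling, so the configured per-operator parallelism is not the ceiling. A sketch of the relevant keys (the value 128 is just an example):

```yaml
# flink-conf.yaml: enable Reactive Mode (standalone application mode)
scheduler-mode: reactive

# upper bound for rescaling; can also be set per job/operator in code
pipeline.max-parallelism: 128
```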
Thank you very much for the new release that makes auto-scaling possible. I'm
currently running multiple Flink jobs, and I've hand-tuned the parallelism of
each of the operators to achieve the best throughput. I would much rather
use the auto-scaling capabilities of Flink than have to hand-tune my j
Great! Feel free to post back if you run into anything else or come up with
a nice template – I agree it would be a nice thing for the community to
have.
Best,
Austin
On Tue, May 4, 2021 at 12:37 AM Salva Alcántara
wrote:
> Hey Austin,
>
> There was no special reason for vendoring using `bazel-
I have a Flink cluster running in Kubernetes, just the basic installation
with one JobManager and two TaskManagers. I want to interact with it via
the command line from a separate container, i.e.:
root@flink-client:/opt/flink# ./bin/flink list --target
kubernetes-application -Dkubernetes.cluster-id=job-m
Hi Flinksters,
Our repo, which is a Maven-based Java project (Flink), went through an SCA
scan using the WhiteSource tool, and the following are the HIGH-severity
issues reported. The target vulnerable jar is not found when we build the
dependency tree of the project.
Could anyone let us know if Flink uses the
Hi Timo,
Thanks for your answer. I think I wasn't clear enough in my initial
message, so let me give more details.
The stream is not keyed by timestamp, it's keyed by a custom field (e.g.,
user-id) and then fed into a KeyedProcessFunction. I want to process all
events for a given user in order, b
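Outside of Flink, the usual per-key ordering trick (buffer out-of-order events, release them in timestamp order once the watermark guarantees nothing earlier can arrive) can be sketched in plain Python. This mirrors what a KeyedProcessFunction would do with keyed state plus event-time timers; the class and event values here are illustrative, not Flink API:

```python
from dataclasses import dataclass, field

@dataclass
class PerKeyReorderBuffer:
    """Buffers (timestamp, event) pairs for one key and releases them in
    timestamp order once the watermark passes their timestamps."""
    pending: list = field(default_factory=list)

    def add(self, timestamp, event):
        # Equivalent to appending to keyed state in processElement().
        self.pending.append((timestamp, event))

    def on_watermark(self, watermark):
        # Equivalent to an event-time timer firing: emit everything at or
        # before the watermark, sorted by timestamp; keep the rest buffered.
        ready = sorted(e for e in self.pending if e[0] <= watermark)
        self.pending = [e for e in self.pending if e[0] > watermark]
        return [event for _, event in ready]

buf = PerKeyReorderBuffer()
buf.add(5, "login")
buf.add(2, "click")
buf.add(9, "logout")
print(buf.on_watermark(6))  # earlier events emitted in timestamp order
```

One buffer instance per user-id gives per-user ordering without blocking other keys.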
As you suggested, I downloaded Flink 1.11.3
to submit a Flink job. The actual application is developed in Flink 1.8.1.
Since the Hadoop cluster is Apache 3.2.0, I downloaded Flink 1.11.3
(flink-1.11.3-bin-scala_2.11.tgz) and tried to submit the job.
While submitting, I am facing the below-mentioned e
Hi Team,
I am trying to submit a Flink job of version 1.11.3. The actual
application is developed in Flink 1.8.1.
Since the Hadoop cluster is Apache 3.2.0, I downloaded Flink 1.11.3
(flink-1.11.3-bin-scala_2.11.tgz) and tried to submit the job.
While submitting, I am facing the below-mentioned excepti
Thanks a lot to everybody who has contributed to the release, in particular
the release managers for running the show!
On Tue, May 4, 2021 at 8:54 AM Konstantin Knauf
wrote:
> Thank you Dawid and Guowei! Great job everyone :)
>
> On Mon, May 3, 2021 at 7:11 PM Till Rohrmann wrote:
>
>> This is
Hi,
My use case involves reading raw data records from Kafka and processing
them. The records are coming from a database, where a periodic job reads
new rows, packages them into a single JSON object (as described below) and
writes the entire record to Kafka.
{
'id': 'some_id',
'key_a': 'v
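A record of that shape can be unpacked with the standard json module. A minimal sketch; only the 'id' field appears in the message above, the other keys are placeholders:

```python
import json

# Hypothetical record along the lines described above.
raw = '{"id": "some_id", "key_a": "value_a", "key_b": "value_b"}'

record = json.loads(raw)
row_id = record.pop("id")  # identifies the source database row
attributes = record        # the remaining key/value payload
```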