Hi Team!
This is probably something for after the release, but I created a simple
prototype for the scaling subresource based on the taskmanager replica count.
You can take a look here:
https://github.com/apache/flink-kubernetes-operator/pull/227
After some consideration I decided against using paral
Hi Alexis,
Sorry for the late response. I am following up from the reply in FLINK-27504 [1].
The MANIFEST file in RocksDB records the history of version changes.
In other words, once a new SST file is created or an old file is deleted via
compaction, RocksDB creates a new version, which will update the MANIFEST
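If you ever need to bound how large the MANIFEST is allowed to grow before
RocksDB rolls it over, here is a minimal sketch using a custom
RocksDBOptionsFactory; the 64 MB figure is only an example value, not a
recommendation from this thread:

import java.util.Collection;
import org.apache.flink.contrib.streaming.state.RocksDBOptionsFactory;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.DBOptions;

public class ManifestLimitingOptionsFactory implements RocksDBOptionsFactory {

    @Override
    public DBOptions createDBOptions(DBOptions currentOptions,
                                     Collection<AutoCloseable> handlesToClose) {
        // Roll over to a new MANIFEST once the current one exceeds ~64 MB (example value).
        return currentOptions.setMaxManifestFileSize(64 * 1024 * 1024L);
    }

    @Override
    public ColumnFamilyOptions createColumnOptions(ColumnFamilyOptions currentOptions,
                                                   Collection<AutoCloseable> handlesToClose) {
        return currentOptions;
    }
}

// Usage (sketch):
// EmbeddedRocksDBStateBackend backend = new EmbeddedRocksDBStateBackend();
// backend.setRocksDBOptions(new ManifestLimitingOptionsFactory());
// env.setStateBackend(backend);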
I am running a Flink application in Java that performs window aggregation.
The query runs successfully on Flink 1.14.4. However, after upgrading to
Flink 1.15.0 and switching the code to use a windowing TVF, it fails with a
runtime error because the planner cannot compile and instantiate the window
Aggs Handler
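For context, this is the general shape of a windowing TVF query in Flink SQL;
the table, columns, and window size below are made up for illustration, not
taken from the failing job:

import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class TumbleTvfExample {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Assumes a table 'orders' with columns (price, order_time) is already registered,
        // where order_time is a TIMESTAMP(3) column with a watermark defined on it.
        Table result = tEnv.sqlQuery(
                "SELECT window_start, window_end, SUM(price) AS total_price " +
                "FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(order_time), INTERVAL '10' MINUTES)) " +
                "GROUP BY window_start, window_end");

        result.execute().print();
    }
}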
I have checkpoints set up against S3 using the Hadoop plugin. (I'll
migrate to Presto at some point.) I've set up entropy injection per the
documentation with
state.checkpoints.dir: s3://my-bucket/_entropy_/my-job/checkpoints
s3.entropy.key: _entropy_
I'm seeing some behavior that I don't quite unde
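For reference, Flink injects the entropy only into the paths of the checkpoint
data files; for the metadata file the entropy key is simply removed, so the
resulting paths look roughly like this (the random characters are illustrative):

s3://my-bucket/f3a9c1d0/my-job/checkpoints/...   (data files, entropy injected)
s3://my-bucket/my-job/checkpoints/...            (_metadata, entropy key removed)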
Hi Őrhidi,
Thank you for helping out. I didn't try it on other k8s clusters; our team
runs entirely on GKE. Could the PSP be the cause? I have allowed the secret
volume in the PSP, but it is still not working.
Best,
*Xiao Ma*
*Geotab*
Software Developer, Data Engineering | B.Sc, M.Sc
Direct
I also forgot to attach the information on how I converted the Avro objects
to bytes in the approach I took with the deprecated Kafka producer.
protected byte[] getValueBytes(Value value)
{
DatumWriter valWriter = new SpecificDatumWriter(
Value.getSchema());
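For completeness, the usual shape of that method, assuming Value is an
Avro-generated SpecificRecord class (this is a sketch, not the exact code from
the original mail), would be:

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;
import org.apache.avro.specific.SpecificDatumWriter;

protected byte[] getValueBytes(Value value) throws IOException {
    // Write the record with Avro's binary encoding into an in-memory buffer.
    DatumWriter<Value> valWriter = new SpecificDatumWriter<>(Value.class);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
    valWriter.write(value, encoder);
    encoder.flush();
    return out.toByteArray();
}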
Hi Team,
This is regarding the Flink Kafka sink. We have a use case with headers and
both key and value as Avro schemas.
Below is what we would intuitively expect for the Avro Kafka key and value:
KafkaSink.<...>builder()
.setBootstrapServers(cloudkafkaBrokerAPI)
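One way to get key, value, and headers onto the produced records is a custom
KafkaRecordSerializationSchema. The sketch below assumes hypothetical MyKey and
MyEvent types plus SerializationSchema instances for the Avro key and value;
the topic and header names are also placeholders:

import java.nio.charset.StandardCharsets;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroKeyValueWithHeaders implements KafkaRecordSerializationSchema<MyEvent> {

    private final SerializationSchema<MyKey> keySchema;     // Avro serializer for the key
    private final SerializationSchema<MyEvent> valueSchema; // Avro serializer for the value

    public AvroKeyValueWithHeaders(SerializationSchema<MyKey> keySchema,
                                   SerializationSchema<MyEvent> valueSchema) {
        this.keySchema = keySchema;
        this.valueSchema = valueSchema;
    }

    @Override
    public ProducerRecord<byte[], byte[]> serialize(MyEvent element,
                                                    KafkaSinkContext context,
                                                    Long timestamp) {
        ProducerRecord<byte[], byte[]> record = new ProducerRecord<>(
                "my-topic",                                 // placeholder topic
                null,                                       // let Kafka pick the partition from the key
                keySchema.serialize(element.getKey()),
                valueSchema.serialize(element));
        record.headers().add("source", "flink".getBytes(StandardCharsets.UTF_8));
        return record;
    }
}

// Usage (sketch):
// KafkaSink<MyEvent> sink = KafkaSink.<MyEvent>builder()
//         .setBootstrapServers(cloudkafkaBrokerAPI)
//         .setRecordSerializer(new AvroKeyValueWithHeaders(keySchema, valueSchema))
//         .build();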
> On 18. May 2022, at 15:34, Alexis Sarda-Espinosa
> wrote:
>
> Hi David,
>
> Please refer to https://issues.apache.org/jira/browse/FLINK-21752
>
> Regards,
> Alexis.
Hi Alexis and Hangxiang,
thank you both for your quick responses. Following Alexis' link, I noticed that
we were still on
Hi, David.
Removing a field from a POJO should work, as you said. But I think we need
more information.
What version of Flink are you using?
Do you have any other modifications?
Could you share your code segments and the JM error log, if convenient?
On Wed, May 18, 2022 at 9:07 PM David Jost wrote
Hi David,
Please refer to https://issues.apache.org/jira/browse/FLINK-21752
Regards,
Alexis.
-Original Message-
From: David Jost
Sent: Wednesday, 18 May 2022 15:07
To: user@flink.apache.org
Subject: Schema Evolution of POJOs fails on Field Removal
Hi,
we currently have an issue, wher
Hi Anitha,
If I understand correctly, your JM/TM process memory is larger than the maximum
physical memory (i.e. 4m > 32*1024 = 32768m). So on a normally configured
YARN cluster, it should be impossible to launch the Flink JM/TM on the worker
nodes due to the limit of `yarn.scheduler.maximum-allo
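Purely as an illustration (these numbers are made up, not taken from this
thread), on 32 GB worker nodes the process sizes would have to stay below
yarn.scheduler.maximum-allocation-mb, e.g.:

jobmanager.memory.process.size: 4g
taskmanager.memory.process.size: 8g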
Hi,
we currently have an issue where our job fails to restart from a savepoint
after we removed a field from a serialised (POJO) class. According to [0], this
kind of evolution is supported, but it sadly only works when adding fields,
not when removing them.
I was hoping someone here might be abl
Hello Matthias,
Thanks for your reply. Yes, indeed, you are correct. My /tmp path is private, so
you have confirmed what I thought was happening.
I have some follow-up questions:
- why do taskmanagers create the chk-x directory but only the jobmanager can
delete it? Shouldn't the jobmanager be th
Hi,
We are using the command below to submit a Flink application job to a GCP
Dataproc cluster using YARN.
*flink run-application -t yarn-application .jar*
Our cluster has 1 master node with 64 GB and 10 worker nodes with 32 GB each.
The Flink configurations given are:
*jobmanager.memory.process.size: 400
Hi everyone,
ApacheCon Asia [1] will feature the Streaming track for the second year.
Please don't hesitate to submit your proposal if there is an interesting
project or Flink experience you would like to share with us!
The conference will be online (virtual) and the talks will be pre-recorded.
T
Hey Danny,
Thanks for getting back to me.
- You are seeing bursty throughput, but the job is keeping up? There is no
backpressure? --> Correct, I'm not seeing any backpressure in any of the
metrics.
- What is the throughput at the sink? --> number of records out is 1100 per 10
seconds.
- On the graph scr
Dear Team,
I am new to PyFlink and would like to request your support with an issue I am
facing. I am using PyFlink version 1.14.4.
My question is: I am using a distributed Kafka setup with PyFlink and want to
collect data from a particular partition of a topic using the PyFlink
FlinkKafkaConsumer. I
Hi,
We are using Flink version 1.13 with a Kafka source and a Kinesis sink with
a parallelism of 3.
On submitting the job I get this error:
Could not copy native binaries to temp directory
/tmp/amazon-kinesis-producer-native-binaries
followed by permission denied, even though all the permissions ha