Sorry, let me explain. I currently have the operator deployed and managed via
ArgoCD. I separated the CRDs out into a different chart so I can do upgrades
on them independently. I am working on upgrading from version 1.7.0 to 1.8.0
using ArgoCD. What I've done is replace the CRDs in the separate chart and mad
Hey!
We have not observed any issues so far. Could you please share some error
information / logs?
Opening a JIRA ticket would be best.
Thanks
Gyula
On Thu, 9 May 2024 at 21:18, Prasad, Neil
wrote:
> I am writing to report an issue with the Flink Kubernetes Operator version
> 1.8.0. The CRD is una
Hi,
What do you mean exactly by "cannot be applied or replaced"? What exactly is
the issue?
Are you installing fresh, or trying to upgrade from a previous version? If
the latter, please follow this:
https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-release-1.8/docs/operations/upgrade/
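For reference, the linked upgrade docs come down to replacing the CRDs shipped with the new operator version before upgrading the Helm release. A rough command sketch against a live cluster (the file paths and release names below are assumptions based on the operator's Helm chart layout; verify them against your checkout and the docs):

```
# Replace (not apply) the CRDs bundled with the new operator version.
# Paths assume the flink-kubernetes-operator repo layout; adjust as needed.
kubectl replace -f helm/flink-kubernetes-operator/crds/flinkdeployments.flink.apache.org-v1.yml
kubectl replace -f helm/flink-kubernetes-operator/crds/flinksessionjobs.flink.apache.org-v1.yml

# Then upgrade the operator release itself (placeholder release/repo names).
helm upgrade flink-kubernetes-operator flink-operator-repo/flink-kubernetes-operator
```

`kubectl replace` is used rather than `kubectl apply` because large CRDs can exceed the annotation size limit that `apply` relies on.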
I am writing to report an issue with the Flink Kubernetes Operator version
1.8.0. The CRD is unable to be applied or replaced in minikube or GKE. However,
the CRD works on version 1.7.0 of the operator. I thought it would be helpful
to bring this issue to the attention of the community and get s
Hello,
I am trying to run a Flink Session Job with a jar that is hosted on a Maven
repository in Google's Artifact Registry.
The first thing I tried was to just specify the `jarURI` directly:
```
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: myJobName
spec:
  deployme
```
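For context, a complete manifest of this shape would look roughly like the following. This is a sketch only; the deployment name, jar URL, and job settings are placeholders, not values from the original message:

```
apiVersion: flink.apache.org/v1beta1
kind: FlinkSessionJob
metadata:
  name: myJobName
spec:
  # Placeholder: must match the name of an existing session-mode FlinkDeployment.
  deploymentName: my-session-cluster
  job:
    # Placeholder URL: the operator fetches the jar from here at submission time.
    jarURI: https://example.com/path/to/my-job.jar
    parallelism: 2
    upgradeMode: stateless
```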
Hi,
We use S3 as our datastore for checkpoint/savepoints, and following an S3 error
we saw that exception:
```
java.io.IOException: GET operation failed: Could not transfer error message
	at org.apache.flink.runtime.blob.BlobClient.getInternal(BlobClient.java:231)
	at org.apache.
```
Hi Kush
Unfortunately there is currently no real Redis connector maintained by the
Flink community. I am aware that Bahir's version might be outdated, but we
are currently working on a community-supported connector [1].
[1] https://github.com/apache/flink-connector-redis-streams
Best Regards
Ahmed Hamd
We have a source/sink mechanism which uses checkpoints for persistence and
can tolerate minor data loss. Is there a way to keep checkpointing enabled
(so those source/sink operators still work) while disabling stateful
recovery during restarts?
Our setup uses Flink 1.16.1 alongside Flink K
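One knob that may be relevant here, assuming the Flink Kubernetes Operator (the message mentions it): the operator's `upgradeMode` controls whether a restarted job recovers from existing state. A sketch with placeholder names:

```
apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: my-deployment   # placeholder
spec:
  job:
    jarURI: local:///opt/flink/usrlib/my-job.jar   # placeholder
    # 'stateless' restarts the job from empty state instead of the latest
    # checkpoint/savepoint; checkpointing itself can remain enabled so the
    # checkpoint-dependent source/sink operators keep working.
    upgradeMode: stateless
```

Whether this covers unplanned restarts (as opposed to operator-driven upgrades) depends on the failure path, so it may only be a partial answer.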
Hi Abhi,
> We see that even when all the Taskslots of that particular operator are
stuck in an INITIALISING state
Can you include the stack traces of these threads, so that we can understand
what the operators are stuck on while INITIALIZING?
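For example (pod names and IDs below are placeholders), a TaskManager thread dump usually shows what initialization is blocked on:

```
# Placeholder pod name: pick the TaskManager pod running the stuck subtasks.
kubectl exec -it my-taskmanager-pod -- jstack 1 > taskmanager-threads.txt

# Alternatively, Flink's REST API can return a thread dump for a TaskManager:
curl http://<jobmanager-host>:8081/taskmanagers/<taskmanager-id>/thread-dump
```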
Regards
Keith
On Thu, May 9, 2024 at 6:58 AM Abhi Sagar Khat