timmy-mathew-ah opened a new issue, #6533: URL: https://github.com/apache/camel-k/issues/6533
### Bug description

Installing multiple global Camel-K operators via Helm (for operator sharding) fails on the second `helm install` due to hardcoded ClusterRole and ClusterRoleBinding names in [rbacs-descoped.yaml](https://github.com/apache/camel-k/blob/main/helm/camel-k/templates/rbacs-descoped.yaml); see [this line](https://github.com/apache/camel-k/blob/3773bc49e5a9172b6cd01728e5add6fbeb4fbdc0/helm/camel-k/templates/rbacs-descoped.yaml#L453) and the other ClusterRoleBindings defined later in the same file.

The [official documentation on multiple operators](https://camel.apache.org/camel-k/next/installation/advanced/multi.html) describes a multi-operator setup in which each operator has a unique `OPERATOR_ID` and only reconciles resources annotated with a matching `camel.apache.org/operator.id`. However, the Helm chart cannot actually deploy this setup, because all cluster-scoped RBAC resources use fixed names regardless of the Helm release name or the `operator.operatorId` value.

**First install** (succeeds):

```bash
helm install camel-k-shard-1 camel-k/camel-k \
  --create-namespace \
  --namespace platform-camel-k-shard-1 \
  --set-string operator.global=true \
  --set operator.operatorId=camel-k-shard-1
```

**Second install** (fails):

```bash
helm install camel-k-shard-2 camel-k/camel-k \
  --create-namespace \
  --namespace platform-camel-k-shard-2 \
  --set-string operator.global=true \
  --set operator.operatorId=camel-k-shard-2
```

**Error:**

```
Error: INSTALLATION FAILED: unable to continue with install: ClusterRole "camel-k-operator" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-name" must equal "camel-k-shard-2": current value is "camel-k-shard-1"; annotation validation error: key "meta.helm.sh/release-namespace" must equal "platform-camel-k-shard-2": current value is "platform-camel-k-shard-1"
```

Issue #6145 reported this, and it seemed to be fixed by PR
#6191, which removed several unnecessary cluster-scoped resources. **However, the core problem still exists in `rbacs-descoped.yaml`**, the template used when `operator.global=true`: all ClusterRole and ClusterRoleBinding names in this file are hardcoded and not parameterized by `{{ .Release.Name }}`, `{{ .Values.operator.operatorId }}`, or `{{ include "camel-k.fullname" . }}`.

The namespace already being dynamic suggests the intent was to support multiple installs in different namespaces, so could `metadata.name` be fixed in a similar way, e.g. using `camel-k-operator-{{ .Release.Name }}` as below? Happy to open a PR.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: camel-k
  name: camel-k-operator-{{ .Release.Name }}  # ← unique per Helm release
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: camel-k-operator
subjects:
- kind: ServiceAccount
  name: camel-k-operator
  namespace: '{{ .Release.Namespace }}'
```

### Camel K or runtime version

v2.9.1
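One observation on the proposed change above: since the reported install error is about the ClusterRole `"camel-k-operator"` itself, not only the binding, the ClusterRole name and the `roleRef` would presumably need the same parameterization. A hypothetical, untested sketch (rules elided, names illustrative):

```yaml
# Sketch only: parameterize the ClusterRole as well, since Helm's
# ownership check fails on the ClusterRole before the binding matters.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app: camel-k
  name: camel-k-operator-{{ .Release.Name }}
rules: []  # actual rules unchanged from the chart template
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app: camel-k
  name: camel-k-operator-{{ .Release.Name }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: camel-k-operator-{{ .Release.Name }}  # must match the ClusterRole above
subjects:
- kind: ServiceAccount
  name: camel-k-operator
  namespace: '{{ .Release.Namespace }}'
```

Note that `roleRef` is immutable on an existing ClusterRoleBinding, which is another reason each release needs its own uniquely named binding rather than sharing one.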

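For background on the sharding model referenced above: the multi-operator docs select which operator reconciles a resource via an annotation on the custom resource. A minimal illustrative sketch (resource name is hypothetical):

```yaml
apiVersion: camel.apache.org/v1
kind: Integration
metadata:
  name: example-route  # hypothetical name
  annotations:
    # must match the OPERATOR_ID of the operator meant to reconcile it
    camel.apache.org/operator.id: camel-k-shard-1
spec:
  # ...integration sources etc. as usual...
```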