This is an automated email from the ASF dual-hosted git repository.

pcongiusti pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/camel-k.git

commit ad79a4635c804d7f67431c9ebfbb8f0f501d7b61
Author: Pasquale Congiusti <[email protected]>
AuthorDate: Mon Jun 10 16:13:04 2024 +0200

    doc: kustomize
---
 docs/modules/ROOT/nav.adoc                         |   3 +-
 .../pages/installation/advanced/kustomize.adoc     | 100 --------------------
 .../ROOT/pages/installation/installation.adoc      | 103 +++++++++------------
 .../ROOT/pages/installation/uninstalling.adoc      |  62 ++++++++++++-
 docs/modules/ROOT/pages/installation/upgrade.adoc  |  37 ++++++--
 e2e/install/kustomize/setup_test.go                |  32 +++++--
 e2e/install/upgrade/kustomize_upgrade_test.go      |  11 +--
 .../kubernetes/descoped/kustomization.yaml         |   1 -
 .../kubernetes/namespaced/kustomization.yaml       |   3 +-
 .../integration-platform.yaml                      |   4 +-
 .../kustomization.yaml}                            |  23 +----
 .../manager}/patch-image-pull-policy-always.yaml   |   0
 12 files changed, 165 insertions(+), 214 deletions(-)

diff --git a/docs/modules/ROOT/nav.adoc b/docs/modules/ROOT/nav.adoc
index b8441accb..a70b946eb 100644
--- a/docs/modules/ROOT/nav.adoc
+++ b/docs/modules/ROOT/nav.adoc
@@ -1,12 +1,11 @@
 * xref:installation/installation.adoc[Installation]
-** xref:installation/advanced/maven.adoc[Configure Maven]
 ** xref:installation/registry/registry.adoc[Configure Registry]
+** xref:installation/advanced/maven.adoc[Configure Maven]
 ** xref:installation/knative.adoc[Configure Knative]
 ** xref:installation/upgrade.adoc[Upgrade]
 ** xref:installation/uninstalling.adoc[Uninstalling]
 ** xref:installation/advanced/advanced.adoc[Advanced]
 *** xref:installation/advanced/build-config.adoc[Build tuning]
-*** xref:installation/advanced/kustomize.adoc[Install Using Kustomize]
 *** xref:installation/advanced/network.adoc[Network architecture]
 *** xref:installation/advanced/resources.adoc[Resource management]
 *** xref:installation/advanced/multi.adoc[Multiple Operators]
diff --git a/docs/modules/ROOT/pages/installation/advanced/kustomize.adoc b/docs/modules/ROOT/pages/installation/advanced/kustomize.adoc
deleted file mode 100644
index 50b1bcdcb..000000000
--- a/docs/modules/ROOT/pages/installation/advanced/kustomize.adoc
+++ /dev/null
@@ -1,100 +0,0 @@
-[[kustomize]]
-= Installing with Kustomize
-
-https://kustomize.io[Kustomize] provides a declarative approach to the 
configuration customization of a
-Camel-K installation. Kustomize works either with a standalone executable or 
as a built-in to ``kubectl``.
-
-== File Location
-
-The https://github.com/apache/camel-k/tree/main/install[install] directory 
provides the configuration
-files for use with Kustomize. The following sub-directories are named to 
describe the purpose of their
-respective kustomization:
-
-* *setup-cluster*: install the cluster-level resources, inc. the 
ClusterResourceDefinitions
-* *setup*: install the roles and permissions required by the camel-k operator 
into the current namespace
-* *operator*: install the camel-k operator into the current namespace of a 
cluster
-* *platform*: install an instance of the camel-k integration-platform into the 
current namespace of a cluster
-* *example*: install an example integration into the current namespace of a 
cluster
-
-== Using kubectl
-
-The kustomization resources can be applied directly to a cluster using 
``kubectl``, eg.
- `kubectl -k setup-cluster`
-
-Due to its declarative nature, it is expected that the configuration files 
would be edited to suit the
-custom implementation. For example, when creating an integration-platform:
-
-* ``kustomization.yaml`` references configuration in 
``pkg/resources/config/samples/patch-integration-platform.yaml``
-* Edit this file according to installation requirements
-* Apply the resources by executing ``kubectl -k platform``
-
-== Using the Makefile
-
-For convenience, a Makefile is included in the install directory, providing a 
frontend interface for
-the most common installation procedures. By incorporating environment 
variables, it is able to update
-some of the configuration automatically before applying it to the cluster 
using ``kubectl``.
-
-The environment variable ``DRY_RUN`` can be used with a value of ``true`` to 
only display the prepared
-resources, allowing the user to check the prospective installation.
-
-A recent version of ``make`` is a pre-requisite and a familiarity with using
-https://www.gnu.org/software/make/manual/make.html[Makefiles] would be 
beneficial.
-
-The Makefile rules are described by executing ``make`` or ``make help``, eg.
-
-....
-Usage: make <PARAM1=val1 PARAM2=val2> <target>
-
-Available targets are:
-
-setup-cluster   Setup the cluster installation by installing crds and cluster 
roles.
-
-                Cluster-admin privileges are required.
-
-                NAMESPACE: Sets the namespace for the resources
-                PLATFORM:  Override the discovered platform, if required
-                DRY_RUN:   If 'true', prints the resources to be applied 
instead of applying them
-
-
-setup           Setup the installation by installing roles and granting 
privileges for the installing operator.
-
-                Calls setup-cluster
-                Cluster-admin privileges are required.
-
-                NAMESPACE: Sets the namespace for the resources
-                GLOBAL:    Converts all roles & bindings to cluster-level 
[true|false]
-                PLATFORM:  Override the discovered platform, if required
-                DRY_RUN:     If 'true', prints the resources to be applied 
instead of applying them
-
-operator        Install the operator deployment and related resources
-
-                Cluster-admin privileges are required.
-
-                NAMESPACE:          Set the namespace to install the operator 
into
-                PLATFORM:           Override the discovered platform, if 
required
-                GLOBAL:             Sets the operator to watch all namespaces 
for custom resources [true|false]
-                CUSTOM_IMAGE:       Set a custom operator image name
-                CUSTOM_VERSION:     Set a custom operator image version/tag
-                ALWAYS_PULL_IMAGES: Sets whether to always pull the operator 
image [true|false]
-                MONITORING:         Adds the prometheus monitoring resources
-                MONITORING_PORT:    Set a custom monitoring port
-                HEALTH_PORT:        Set a custom health port
-                LOGGING_LEVEL:      Set the level of logging [info|debug]
-                DRY_RUN:            Prints the resources to be applied instead 
of applying them
-
-
-platform        Install the integration platform
-
-                Cluster-admin privileges are required.
-
-                NAMESPACE: Set the namespace to install the operator into
-                PLATFORM:  Override the discovered platform, if required
-                DRY_RUN:   Prints the resources to be applied instead of 
applying them [true,false]
-
-
-example         Installs the example integration
-
-                NAMESPACE: Set the namespace to install the example into
-                PLATFORM:  Override the discovered platform, if required
-                DRY_RUN:   Prints the resources to be applied instead of 
applying them [true, false]
-....
diff --git a/docs/modules/ROOT/pages/installation/installation.adoc b/docs/modules/ROOT/pages/installation/installation.adoc
index a02a3b01e..310463f26 100644
--- a/docs/modules/ROOT/pages/installation/installation.adoc
+++ b/docs/modules/ROOT/pages/installation/installation.adoc
@@ -3,6 +3,11 @@
 
 Camel K allows us to run Camel integrations directly on a Kubernetes or 
OpenShift cluster. To use it, you need to be connected to a cloud environment 
or to a local cluster created for development purposes (ie, Minikube or Kind).
 
+[[registry]]
+== Registry requirements
+
+Camel K may require a container registry, which is used to store the images built for your applications. Certain clusters may use their internal container registry (ie, Openshift, Minikube or https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry[KEP-1755 compatible] clusters). If that's not the case for your cluster, make sure to have a xref:installation/registry/registry.adoc#configuring-registry-install-time[container [...]
+
 [[cli]]
 == Installation via Kamel CLI
 
@@ -17,96 +22,80 @@ Once you have put the `kamel` CLI in the path, log into 
your cluster using the s
 $ kamel install --olm=false
 ----
 
-NOTE: if you're not using Minikube or Openshift, make sure to have a 
xref:installation/registry/registry.adoc#configuring-registry-install-time[container
 registry] available and use also `--registry` parameter.
-
 This will configure the cluster with the Camel K custom resource definitions 
and install the operator on the current namespace with the default settings.
 
 IMPORTANT: Custom Resource Definitions (CRD) are cluster-wide objects and you 
need admin rights to install them. Fortunately, this operation can be done 
*once per cluster*. So, if the `kamel install` operation fails, you'll be asked 
to repeat it when logged as admin.
 For CRC, this means executing `oc login -u system:admin` then `kamel install 
--cluster-setup` only for the first-time installation.
 
-[[kustomize]]
-== Installation via Kustomize
-
-Camel K can be installed using https://kustomize.io[Kustomize], providing an 
interface for configuring more advanced features.
+[[helm]]
+== Installation via Helm Hub
 
-**First you need to get the kustomize files**
+Camel K is also available in Helm Hub:
 
 ```
-# Clone the project repository
-$ https://github.com/apache/camel-k.git
-$ cd camel-k
-# You can use any release branch or skip this step to use it the last code on 
`main`
-$ git checkout release-a.b.x
-$ cd install
+$ helm repo add camel-k https://apache.github.io/camel-k/charts/
+$ helm install camel-k [--set platform.build.registry.address=<my-registry>] camel-k/camel-k
 ```
 
-**Next you need to apply configuration at cluster level**
+More instructions on the https://hub.helm.sh/charts/camel-k/camel-k[Camel K Helm] page.
 
-```
-$ kubectl kustomize --load-restrictor LoadRestrictionsNone setup-cluster/ | 
kubectl create -f -
-```
+[[olm]]
+== Installation via Operator Hub
 
-**Then the roles and privileges needs to be added**
+Camel K is also available in Operator Hub. You will need the OLM framework to be properly installed in your cluster. More instructions on the https://operatorhub.io/operator/camel-k[Camel K Operator Hub] page.
 
 ```
-$ kubectl apply -k setup
-$ kubectl apply -k pkg/resources/config/rbac/namespaced
-# For openshift
-$ kubectl apply -k pkg/resources/config/rbac/openshift
-$ kubectl apply -k pkg/resources/config/rbac/openshift/namespaced
+$ kubectl create -f https://operatorhub.io/install/camel-k.yaml
 ```
 
-Should you want your operator operator to watch all namespaces (global 
operator), you will replace `pkg/resources/config/rbac/namespaced` by 
`pkg/resources/config/rbac/descoped` and 
`pkg/resources/config/rbac/openshift/namespaced` by 
`pkg/resources/config/rbac/openshift/descoped`.
+You can edit the `Subscription` custom resource, setting the channel you want to use. From Camel K version 2 onward, we're going to provide an installation channel for each major version we release (ie, `stable-v2`). This will simplify the upgrade process if you choose to perform an automatic upgrade.
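+
+For instance, switching to the `stable-v2` channel could be done with a patch like the following (a sketch only: the `Subscription` name and the `operators` namespace are assumptions that depend on how the operator was installed):
+
+```
+$ kubectl patch subscription camel-k -n operators --type merge -p '{"spec":{"channel":"stable-v2"}}'
+```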
 
-**Finally the operator can be deployed**
+NOTE: Some Kubernetes clusters such as Openshift may let you perform the same operation from a GUI as well. Refer to the cluster instructions to learn how to perform such an action from the user interface.
 
-```
-$ kubectl apply -k operator
-$ kubectl apply -k platform
-```
+[[kustomize]]
+== Installation via Kustomize
+
+https://kustomize.io[Kustomize] provides a declarative approach to the configuration customization of a Camel K installation. Kustomize works either as a standalone executable or as a built-in to `kubectl`. The https://github.com/apache/camel-k/tree/main/install[/install] directory provides a series of base and overlay configurations that you can use. You can create your own overlays or customize the ones available in the repository to accommodate your needs.
 
-By default the operator is configured to get the registry information from a 
Configmap expected the namespace `kube-public` like this example:
+=== One-liner operator installation procedure
+
+If you don't need to provide any configuration nor a registry (ie, in Openshift), you can apply this simple one-liner:
 
 ```
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: local-registry-hosting
-  namespace: kube-public
-data:
-  localRegistryHosting.v1: |
-    hostFromContainerRuntime: "registry:5000"
+$ kubectl apply -k github.com/apache/camel-k/install/overlays/kubernetes/descoped?ref=v2.4.0 --server-side
 ```
 
-NOTE: you probably want to edit the configuration. Please, do any change right 
after cloning the repository. Be careful to avoid making any modification in 
the `install/config` folder.
+You can specify the version you want to install via the `ref` parameter (ie, `v2.4.0`). The command above will install a descoped (global) operator in the `camel-k` namespace.
 
-More information on the xref:installation/advanced/kustomize.adoc[Kustomize 
Camel K installation procedure] page.
+NOTE: if you're not installing on Openshift you will need to manually change the IntegrationPlatform registry configuration, as the operator won't be able to find any valid registry address.
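+
+As a sketch, assuming the default IntegrationPlatform is named `camel-k` and lives in the `camel-k` namespace, the registry address could be changed with a patch like:
+
+```
+$ kubectl patch integrationplatform camel-k -n camel-k --type merge -p '{"spec":{"build":{"registry":{"address":"my-registry-host.io"}}}}'
+```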
 
-[[olm]]
-== Installation via Operator Hub
+=== Custom configuration procedure
 
-Camel K is also available in Operator Hub. You will need the OLM framework to 
be properly installed in your cluster. More instructions on the 
https://operatorhub.io/operator/camel-k[Camel K Operator Hub] page.
+Most often you will want to specify different parameters to configure the registry and other platform behaviors. In such a case you can clone the project repository and use any of the overlays available, customizing them to your needs.
 
 ```
-$ kubectl create -f https://operatorhub.io/install/camel-k.yaml
+# Clone the project repository
+$ git clone https://github.com/apache/camel-k.git
+$ cd camel-k
+# You can use any release tag (recommended as it is immutable) or branch
+$ git checkout v2.4.0
+$ cd install/overlays
 ```
 
-You can edit the `Subscription` custom resource, setting the channel you want 
to use. From Camel K version 2 onward, we're going to provide an installation 
channel for each major version we're releasing (ie, `stable-v2`). This will 
simplify the upgrade process if you choose to perform an automatic upgrade.
-
-NOTE: Some Kubernetes clusters such as Openshift (or CRC) may let you to 
perform the same operation from a GUI as well. Refer to the cluster instruction 
to learn how to perform such action.
-
-
-[[helm]]
-== Installation via Helm Hub
-
-Camel K is also available in Helm Hub:
+In this directory you may find a series of default configurations for Kubernetes, Openshift and any other sensible profile. For Kubernetes, you can see we have prepared a `descoped` configuration and a `namespaced` one, which install the operator globally or in a specific namespace, respectively.
 
 ```
-$ helm repo add camel-k https://apache.github.io/camel-k/charts/
-$ helm install my-camel-k camel-k/camel-k
+# Default, use this namespace (edit `kustomization.yaml` to change it)
+$ kubectl create ns camel-k
+$ kubectl apply -k kubernetes/descoped --server-side
+# Change the registry address (edit the file for more configuration if required)
+$ sed -i 's/address: .*/address: my-registry-host.io/' platform/integration-platform.yaml
+$ kubectl apply -k platform -n camel-k
 ```
 
-More instructions on the https://hub.helm.sh/charts/camel-k/camel-k[Camel K 
Helm] page.
+NOTE: you don't need to set the platform if running on Openshift.
+
+The above commands will install a global Camel K operator in the `camel-k` namespace using the container registry you've provided. The `--server-side` option is required in order to prevent errors while installing the CRDs. We need to apply the platform configuration in a separate step, as Kustomize may not yet be aware of the CRDs if it's applied in the same step.
 
 [[test]]
 == Test your installation
@@ -128,8 +117,6 @@ Camel K installation is usually straightforward, but for 
certain cluster types y
 - xref:installation/platform/openshift.adoc[OpenShift]
 - xref:installation/platform/crc.adoc[Red Hat CodeReady Containers (CRC)]
 
-NOTE: Minishift is no longer supported since Camel K 1.5.0. You can use 
xref:installation/platform/crc.adoc[CRC] for a local OpenShift cluster.
-
 [[fine-tuning]]
 == Fine Tuning
 
diff --git a/docs/modules/ROOT/pages/installation/uninstalling.adoc b/docs/modules/ROOT/pages/installation/uninstalling.adoc
index c236dbee7..7cd1f746f 100644
--- a/docs/modules/ROOT/pages/installation/uninstalling.adoc
+++ b/docs/modules/ROOT/pages/installation/uninstalling.adoc
@@ -1,7 +1,10 @@
 [[uninstalling]]
 = Uninstalling Camel K
 
+We're sad to see you go, but if you really need to, it is possible to completely uninstall Camel K from your cluster. The uninstalling procedure typically removes the operator but keeps the Custom Resource Definitions and any Integration which was previously running. These can be removed by the user with an additional cleaning operation.
+
+[[cli]]
+== Uninstall via Kamel CLI
 
 [source]
 ----
@@ -12,11 +15,66 @@ This will uninstall all Camel K resources along with the 
operator from the clust
 
 NOTE:  By _default_ the resources possibly shared between clusters such as 
https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources[CustomResourceDefinitions
 (CRD)], 
https://kubernetes.io/docs/reference/access-authn-authz/rbac[ClusterRole] and 
https://docs.openshift.com/container-platform/4.1/applications/operators/olm-understanding-olm.html[Operator
 Lifecycle Manager(OLM)] will be  **excluded**. To force the inclusion of all 
resources you can use the **--all* [...]
 
+[[helms]]
+== Uninstall via Helm
+
+The Helm procedure takes care of deleting only the operator Deployment:
+
+```
+$ helm uninstall camel-k
+```
+
+Check instructions on the https://hub.helm.sh/charts/camel-k/camel-k[Camel K Helm] page to remove CRDs and any other installation resource.
+
+[[operatorhub]]
+== Uninstall via Operator Hub
+
+In order to uninstall via OLM, you'll need to identify and remove the Subscription custom resource related to Camel K. Check instructions on the https://olm.operatorframework.io/docs/tasks/uninstall-operator/[uninstall an operator] page from OLM.
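+
+A possible cleaning sequence (a sketch only: the `operators` namespace and the ClusterServiceVersion name are assumptions that depend on how the subscription was created):
+
+```
+$ kubectl get subscription,clusterserviceversion -n operators
+$ kubectl delete subscription camel-k -n operators
+$ kubectl delete clusterserviceversion camel-k.v2.4.0 -n operators
+```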
+
+[[kustomize]]
+== Uninstall via Kustomize
+
+Uninstalling via Kustomize may require you to store the configuration you've used at install time and delete the applied resources. However, this is something we discourage, as it may also remove applications that are running and that you may not want to delete (see the generic cleaning procedure for an alternative approach).
+
+WARNING: this operation may remove CRDs and any application that is still running.
+
+```
+$ kustomize build 'overlays/my-configuration' | kubectl delete -f -
+```
+
+[[generic]]
+== Uninstall cleaning cluster resources
+
+Another alternative is to delete the resources the operator is using in a controlled way, cleaning them one by one.
+
+=== Uninstall the operator only (keeps CRDs and any running Integration)
+
+In order to remove the operator and any configuration resource it uses, you'll need to perform the following cleaning operation:
+
+```
+$ kubectl delete deploy,configmap,secret,sa,rolebindings,clusterrolebindings,roles,clusterroles,integrationplatform -l app=camel-k
+```
+
+NOTE: CRDs will be kept, and any Integration will remain alive and running.
+
+=== Uninstall CRDs (and any running Integration)
+
+In order to remove the CRDs you need to execute:
+
+```
+$ kubectl delete crd -l app=camel-k
+```
+
+NOTE: Integrations will be garbage collected by the cluster, and so will any running application.
+
+[[verify]]
+== Verify your cluster
+
 To verify that all resources have been removed you can use the following 
command:
 
 [source]
 ----
-kubectl get 
all,pvc,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd
 -l 'app=camel-k'
+kubectl get 
all,configmap,rolebindings,clusterrolebindings,secrets,sa,roles,clusterroles,crd
 -l 'app=camel-k'
 NAME                                   READY   STATUS        RESTARTS   AGE
 clusterrole.rbac.authorization.k8s.io/camel-k:edit   2020-05-28T20:31:39Z
 
diff --git a/docs/modules/ROOT/pages/installation/upgrade.adoc b/docs/modules/ROOT/pages/installation/upgrade.adoc
index eb5cf6a18..95dccc2b3 100644
--- a/docs/modules/ROOT/pages/installation/upgrade.adoc
+++ b/docs/modules/ROOT/pages/installation/upgrade.adoc
@@ -1,23 +1,24 @@
 [[upgrade]]
-= Upgrading Camel K
+= Upgrade Camel K
 
-Camel K is delivering new features with each new release, so, you'll be 
probably running the upgrade process quite often. OLM installation method gives 
you the possibility to even perform this operation automatically, selecting the 
auto-upgrade feature when installing. Here we're providing the steps required 
for non-OLM installation procedure. This is working when using CLI but it can 
be adapted to any other installation methodology.
+Camel K is delivering new features with each new release, so you'll probably be running the upgrade process quite often. The OLM installation method even gives you the possibility to perform this operation automatically, by selecting the auto-upgrade feature when installing. The upgrade operation will install all the required configuration for the new operator version, replacing the previous one. Mind that the `Integration` resources running won't be affected, so they will keep running with th [...]
 
-When a new release is available, you need to perform a forcefully installation 
on top of the existing installation. This is required in order to upgrade the 
CRDs and any other configuration required by the new operator. You need to 
replace your `kamel` CLI binary with the new one released. Once you have 
replaced your `kamel` you can proceed with the installation:
+NOTE: the deployment resources linked to an Integration (ie, Deployment, Knative Service or CronJob) can change if the new operator sets any new configuration. This would lead to a transparent Pod rollout for all the existing Integrations at their very first reconciliation loop cycle (when the new operator takes over from the previous one).
+
+[[cli]]
+== Upgrade via Kamel CLI
+
+The CLI needs to perform a forced installation on top of the existing one. This is required in order to upgrade the CRDs and any other configuration needed by the new operator. You need to replace your `kamel` CLI binary with the newly released one. Once you have replaced your `kamel` you can proceed with the installation:
 
 [source]
 ----
 kamel install --force --olm=false
 ----
 
-This operation will install all the required configuration for the new 
operator version, replacing the previous one. Mind that the `Integration` 
resources running won't be affected, so they will keep running with the default 
runtime details provided in the previous operator version.
-
-However you must notice that the deployment resources linked to an Integration 
(ie, Deployment, Knative-Service or CronJob) can change, if the new operator is 
setting any new configuration. This would lead to a transparent Pod rollout for 
all the existing Integrations at their very first reconciliation loop cycle 
(when the new operator will takeover from the previous one).
+[[helms]]
+== Upgrade via Helm
 
-[[helms-crds]]
-== CRD Upgrades (Helm upgrade)
-
-Generally, when upgrading a patch or a minor version, we may introduce slight 
non-breaking compatibility changes in CRDs. These changes should be onboarded 
with the installation procedure you're using (CLI, OLM). However, you may want 
to control the upgrade of CRDs (for instance, upgrading in Helm, which, does 
not support CRDs upgrade out of the box). In this case, before doing the 
upgrade, you'll need to manually upgrade the CRDs, in order to use the new 
parameters expected by the new o [...]
+Generally, when upgrading a patch or a minor version, we may introduce slight non-breaking compatibility changes in the CRDs. These changes should be onboarded with the installation procedure you're using (CLI, OLM, Kustomize). However, you may want to control the upgrade of CRDs (for instance, when upgrading in Helm, which does not support CRDs upgrade out of the box). In this case, before doing the upgrade, you'll need to manually upgrade the CRDs, in order to use the new parameters expected  [...]
 
 ```bash
 # Upgrade the CRDs
@@ -28,6 +29,22 @@ $ kubectl replace -f camel-k/crds
 $ helm upgrade camel-k/camel-k --version x.y.z
 ```
 
+[[operatorhub]]
+== Upgrade via Operator Hub
+
+Upgrading via https://operatorhub.io/[Operator Hub] may be done automatically by the cluster if this option was set at installation time. If not, you need to follow the instructions on the https://operatorhub.io/operator/camel-k[Camel K Operator Hub] page.
+
+[[kustomize]]
+== Upgrade via Kustomize
+
+If you want to upgrade via https://kustomize.io[Kustomize], you'll need to execute the same installation procedure you did for the previous version, adding the `--force-conflicts` flag, which will take care of overwriting any conflicting configuration (ie, rewriting the CRDs). Here is an example for a descoped (global) installation:
+
+```
+$ kubectl apply -k github.com/apache/camel-k/install/overlays/kubernetes/descoped?ref=v2.4.0 --server-side --force-conflicts
+```
+
+NOTE: you may need to apply further configuration to reflect the same customizations done in the previous version's installation.
+
 [[refresh-integrations]]
 == Refresh integrations
 
diff --git a/e2e/install/kustomize/setup_test.go b/e2e/install/kustomize/setup_test.go
index 90db17876..30e4ecb21 100644
--- a/e2e/install/kustomize/setup_test.go
+++ b/e2e/install/kustomize/setup_test.go
@@ -60,19 +60,25 @@ func TestKustomizeNamespaced(t *testing.T) {
                                fmt.Sprintf("s/namespace: .*/namespace: %s/", 
ns),
                                
fmt.Sprintf("%s/overlays/kubernetes/namespaced/kustomization.yaml", 
kustomizeDir),
                        ))
+               ExpectExecSucceed(t, g, Kubectl(
+                       "apply",
+                       "-k",
+                       fmt.Sprintf("%s/overlays/kubernetes/namespaced", 
kustomizeDir),
+                       "--server-side",
+               ))
                ExpectExecSucceed(t, g,
                        exec.Command(
                                "sed",
                                "-i",
                                fmt.Sprintf("s/address: .*/address: %s/", 
registry),
-                               
fmt.Sprintf("%s/overlays/kubernetes/namespaced/integration-platform.yaml", 
kustomizeDir),
+                               
fmt.Sprintf("%s/overlays/platform/integration-platform.yaml", kustomizeDir),
                        ))
-
                ExpectExecSucceed(t, g, Kubectl(
                        "apply",
                        "-k",
-                       fmt.Sprintf("%s/overlays/kubernetes/namespaced", 
kustomizeDir),
-                       "--server-side",
+                       fmt.Sprintf("%s/overlays/platform", kustomizeDir),
+                       "-n",
+                       ns,
                ))
                // Refresh the test client to account for the newly installed 
CRDs
                RefreshClient(t)
@@ -106,7 +112,7 @@ func TestKustomizeNamespaced(t *testing.T) {
                // Test operator only uninstall
                ExpectExecSucceed(t, g, Kubectl(
                        "delete",
-                       
"deploy,configmap,secret,sa,rolebindings,clusterrolebindings,roles,clusterroles",
+                       
"deploy,configmap,secret,sa,rolebindings,clusterrolebindings,roles,clusterroles,integrationplatform",
                        "-l",
                        "app=camel-k",
                        "-n",
@@ -152,19 +158,25 @@ func TestKustomizeDescoped(t *testing.T) {
                                fmt.Sprintf("s/namespace: .*/namespace: %s/", 
ns),
                                
fmt.Sprintf("%s/overlays/kubernetes/descoped/kustomization.yaml", kustomizeDir),
                        ))
+               ExpectExecSucceed(t, g, Kubectl(
+                       "apply",
+                       "-k",
+                       fmt.Sprintf("%s/overlays/kubernetes/descoped", 
kustomizeDir),
+                       "--server-side",
+               ))
                ExpectExecSucceed(t, g,
                        exec.Command(
                                "sed",
                                "-i",
                                fmt.Sprintf("s/address: .*/address: %s/", 
registry),
-                               
fmt.Sprintf("%s/overlays/kubernetes/descoped/integration-platform.yaml", 
kustomizeDir),
+                               
fmt.Sprintf("%s/overlays/platform/integration-platform.yaml", kustomizeDir),
                        ))
-
                ExpectExecSucceed(t, g, Kubectl(
                        "apply",
                        "-k",
-                       fmt.Sprintf("%s/overlays/kubernetes/descoped", 
kustomizeDir),
-                       "--server-side",
+                       fmt.Sprintf("%s/overlays/platform", kustomizeDir),
+                       "-n",
+                       ns,
                ))
 
                // Refresh the test client to account for the newly installed 
CRDs
@@ -217,7 +229,7 @@ func TestKustomizeDescoped(t *testing.T) {
                        // Test operator only uninstall
                        ExpectExecSucceed(t, g, Kubectl(
                                "delete",
-                               
"deploy,configmap,secret,sa,rolebindings,clusterrolebindings,roles,clusterroles",
+                               
"deploy,configmap,secret,sa,rolebindings,clusterrolebindings,roles,clusterroles,integrationplatform",
                                "-l",
                                "app=camel-k",
                                "-n",
diff --git a/e2e/install/upgrade/kustomize_upgrade_test.go b/e2e/install/upgrade/kustomize_upgrade_test.go
index e3bbdbdfb..2e6736ef0 100644
--- a/e2e/install/upgrade/kustomize_upgrade_test.go
+++ b/e2e/install/upgrade/kustomize_upgrade_test.go
@@ -66,7 +66,7 @@ func TestKustomizeUpgrade(t *testing.T) {
                g.Eventually(CRDs(t)).Should(HaveLen(0))
 
                // Should both install the CRDs and kamel in the given namespace
-               g.Expect(Kamel(t, ctx, "install", "-n", ns, 
"--global").Execute()).To(Succeed())
+               g.Expect(Kamel(t, ctx, "install", "-n", ns, "--global", 
"--olm=false", "--force").Execute()).To(Succeed())
                // Check the operator pod is running
                g.Eventually(OperatorPodPhase(t, ctx, ns), 
TestTimeoutMedium).Should(Equal(corev1.PodRunning))
                // Refresh the test client to account for the newly installed 
CRDs
@@ -101,19 +101,12 @@ func TestKustomizeUpgrade(t *testing.T) {
                                        fmt.Sprintf("s/namespace: .*/namespace: 
%s/", ns),
                                        
fmt.Sprintf("%s/overlays/kubernetes/descoped/kustomization.yaml", kustomizeDir),
                                ))
-                       ExpectExecSucceed(t, g,
-                               exec.Command(
-                                       "sed",
-                                       "-i",
-                                       fmt.Sprintf("s/address: .*/address: %s/", registry),
-                                       fmt.Sprintf("%s/overlays/kubernetes/descoped/integration-platform.yaml", kustomizeDir),
-                               ))
-
                        ExpectExecSucceed(t, g, Kubectl(
                                "apply",
                                "-k",
                                fmt.Sprintf("%s/overlays/kubernetes/descoped", kustomizeDir),
                                "--server-side",
+                               "--wait",
                                "--force-conflicts",
                        ))
 
diff --git a/install/overlays/kubernetes/descoped/kustomization.yaml b/install/overlays/kubernetes/descoped/kustomization.yaml
index 1f0e96900..421242eae 100644
--- a/install/overlays/kubernetes/descoped/kustomization.yaml
+++ b/install/overlays/kubernetes/descoped/kustomization.yaml
@@ -20,7 +20,6 @@ kind: Kustomization
 resources:
 - ../../../base
 - ../../../base/config/rbac/descoped
-- integration-platform.yaml
 
 namespace: camel-k
 
diff --git a/install/overlays/kubernetes/namespaced/kustomization.yaml b/install/overlays/kubernetes/namespaced/kustomization.yaml
index 54d318512..c5dbe2095 100644
--- a/install/overlays/kubernetes/namespaced/kustomization.yaml
+++ b/install/overlays/kubernetes/namespaced/kustomization.yaml
@@ -20,11 +20,10 @@ kind: Kustomization
 resources:
 - ../../../base
 - ../../../base/config/rbac/namespaced
-- integration-platform.yaml
 
 namespace: default
 
-# You can provide any required adjustement here. Take the following as references:
+# You can provide any required adjustments here. Take the following as references:
 # patchesStrategicMerge:
 # - patch-toleration.yaml
 # - patch-node-selector.yaml
diff --git a/install/overlays/kubernetes/descoped/integration-platform.yaml b/install/overlays/platform/integration-platform.yaml
similarity index 94%
rename from install/overlays/kubernetes/descoped/integration-platform.yaml
rename to install/overlays/platform/integration-platform.yaml
index 05136f15b..01ef3cb6f 100644
--- a/install/overlays/kubernetes/descoped/integration-platform.yaml
+++ b/install/overlays/platform/integration-platform.yaml
@@ -19,13 +19,15 @@ apiVersion: camel.apache.org/v1
 kind: IntegrationPlatform
 metadata:
   name: camel-k
+  labels:
+    app: "camel-k"
 spec:
   build:
     # Registry is required unless your cluster has KEP-1755 enabled and you want to use the local registry.
     # This is a feature recommended for development purpose only.
     # more info at https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
     registry:
-      # For minikube local cluster you can enable one with
+      # For minikube local cluster you can enable a local registry with
       #
       # $ minikube addons enable registry
       #
diff --git a/install/overlays/kubernetes/namespaced/integration-platform.yaml b/install/overlays/platform/kustomization.yaml
similarity index 56%
rename from install/overlays/kubernetes/namespaced/integration-platform.yaml
rename to install/overlays/platform/kustomization.yaml
index 05136f15b..18ba2342b 100644
--- a/install/overlays/kubernetes/namespaced/integration-platform.yaml
+++ b/install/overlays/platform/kustomization.yaml
@@ -14,23 +14,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 # ---------------------------------------------------------------------------
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
 
-apiVersion: camel.apache.org/v1
-kind: IntegrationPlatform
-metadata:
-  name: camel-k
-spec:
-  build:
-    # Registry is required unless your cluster has KEP-1755 enabled and you want to use the local registry.
-    # This is a feature recommended for development purpose only.
-    # more info at https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
-    registry:
-      # For minikube local cluster you can enable one with
-      #
-      # $ minikube addons enable registry
-      #
-      # and get the value from
-      # $ kubectl -n kube-system get service registry -o jsonpath='{.spec.clusterIP}'
-      #
-      address: registry-host.io
-      insecure: true
+resources:
+- integration-platform.yaml
diff --git a/install/overlays/kubernetes/namespaced/patch-image-pull-policy-always.yaml b/pkg/resources/config/manager/patch-image-pull-policy-always.yaml
similarity index 100%
rename from install/overlays/kubernetes/namespaced/patch-image-pull-policy-always.yaml
rename to pkg/resources/config/manager/patch-image-pull-policy-always.yaml
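The diff above moves the IntegrationPlatform into a standalone `overlays/platform` overlay and drops the inline `sed` on `descoped/integration-platform.yaml` from the upgrade test, while the setup test now applies `overlays/platform` separately. A minimal sketch of that patch-then-apply flow (the file content and the `10.96.0.2` registry address below are placeholders; the paths are taken from the renames in this diff):

```shell
workdir=$(mktemp -d)
# Stand-in for install/overlays/platform/integration-platform.yaml
cat > "$workdir/integration-platform.yaml" <<'EOF'
apiVersion: camel.apache.org/v1
kind: IntegrationPlatform
metadata:
  name: camel-k
spec:
  build:
    registry:
      address: registry-host.io
      insecure: true
EOF
# Same substitution the tests run via exec.Command("sed", "-i", ...)
sed -i "s/address: .*/address: 10.96.0.2/" "$workdir/integration-platform.yaml"
grep 'address:' "$workdir/integration-platform.yaml"
# On a real cluster the overlay would then be applied on its own, e.g.:
#   kubectl apply -k install/overlays/platform -n <namespace>
```

Keeping the platform in its own kustomization means the operator overlays (namespaced/descoped) can be installed or upgraded without touching the registry configuration.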
