This is an automated email from the ASF dual-hosted git repository.
ricardozanini pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-kie-kogito-docs.git
The following commit(s) were added to refs/heads/main by this push:
new 245b4c772 Issue-596: Holistic review of non-quarkus cloud chapter
(#625)
245b4c772 is described below
commit 245b4c772fc0c67a5f8fe5e3afca84c79f48ae7e
Author: Dominik Hanák <[email protected]>
AuthorDate: Thu May 23 17:00:06 2024 +0200
Issue-596: Holistic review of non-quarkus cloud chapter (#625)
---
serverlessworkflow/modules/ROOT/nav.adoc | 2 +-
.../operator/add-custom-ca-to-a-workflow-pod.adoc | 13 ++-
.../cloud/operator/build-and-deploy-workflows.adoc | 24 +++--
.../cloud/operator/building-custom-images.adoc | 4 +-
.../configuring-knative-eventing-resources.adoc | 14 ++-
.../cloud/operator/configuring-workflows.adoc | 4 +-
.../pages/cloud/operator/customize-podspec.adoc | 2 +-
.../pages/cloud/operator/developing-workflows.adoc | 59 ++++++++++--
.../operator/install-serverless-operator.adoc | 52 +++--------
.../cloud/operator/referencing-resource-files.adoc | 20 ++--
.../pages/cloud/operator/supporting-services.adoc | 103 +++++++++++++++++++--
.../cloud/operator/workflow-status-conditions.adoc | 18 ++--
.../getting-started/preparing-environment.adoc | 5 +-
.../create-your-first-workflow-service.adoc | 2 +-
...ng-openapi-services-endpoints-with-quarkus.adoc | 2 +-
15 files changed, 222 insertions(+), 102 deletions(-)
diff --git a/serverlessworkflow/modules/ROOT/nav.adoc
b/serverlessworkflow/modules/ROOT/nav.adoc
index 36403b49b..3fcd0031b 100644
--- a/serverlessworkflow/modules/ROOT/nav.adoc
+++ b/serverlessworkflow/modules/ROOT/nav.adoc
@@ -84,7 +84,7 @@
*** xref:cloud/operator/using-persistence.adoc[Using Persistence]
*** xref:cloud/operator/configuring-knative-eventing-resources.adoc[Knative
Eventing]
*** xref:cloud/operator/known-issues.adoc[Roadmap and Known Issues]
-*** xref:cloud/operator/add-custom-ca-to-a-workflow-pod.adoc[Add A Custom CA
To A Workflow Pod]
+*** xref:cloud/operator/add-custom-ca-to-a-workflow-pod.adoc[Add Custom CA to
Workflow Pod]
* Integrations
** xref:integrations/core-concepts.adoc[]
* Job Service
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc
index 4cf8c9e07..c9e1d3084 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/add-custom-ca-to-a-workflow-pod.adoc
@@ -3,7 +3,7 @@
:keywords: kogito, sonataflow, workflow, serverless, operator, kubernetes,
minikube, openshift, containers
:keytool-docs:
https://docs.oracle.com/en/java/javase/21/docs/specs/man/keytool.html
-If you're working with containers running Java applications and need to add a
CA (Certificate Authority) certificate for secure communication, you can follow
these steps. This guide assumes you are familiar with containers and have basic
knowledge of working with YAML files.
+{product_name} applications are containers running Java. If you need to add a
CA (Certificate Authority) certificate for secure communication, this guide
explains the necessary steps to set up a CA for your workflow application. The
guide assumes you are familiar with containers and have basic knowledge of
working with YAML files.
:toc:
@@ -19,11 +19,11 @@ The containerized application may not know the CA
certificate in build time, so
Before proceeding, ensure you have the CA certificate file (in PEM format)
that you want to add to the Java container. If you don't have it, you may need
to obtain it from your system administrator or certificate provider.
-For this guide, we would take the k8s cluster root CA that is automatically
deployed into every container under
`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
+For this guide, we are using the k8s cluster root CA that is automatically
deployed into every container under
`/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`
=== Step 2: Prepare a trust store in an init-container
-Add or amend these volumes and init-container snippet to your pod spec or
`podTemplate` in a deployment:
+Add or amend the following `volumes` and `init-container` snippets in your pod
spec or `podTemplate` in a deployment:
[source,yaml]
---
@@ -51,8 +51,7 @@ The default keystore under `$JAVA_HOME` is part of the
container image and is no
=== Step 3: Configure Java to load the new keystore
Here you can mount the new, modified `cacerts` into the default location where
the JVM looks.
-The `Main.java` example uses the standard HTTP client so alternatively you
could mount the `cacerts` to a different location and
-configure the Java runtime to load the new keystore with a
`-Djavax.net.ssl.trustStore` system property.
+The `Main.java` example uses the standard HTTP client, so alternatively you
could mount the `cacerts` to a different location and configure the Java
runtime to load the new keystore with a `-Djavax.net.ssl.trustStore` system
property.
Note that libraries like RESTEasy don't respect that flag, and you may need to
set the trust store location programmatically.
[source,yaml]
@@ -185,7 +184,7 @@ spec:
== Additional Resources
-* Keytool documentation: {keytool-docs}
-* Dynamically Creating Java keystores OpenShift - Blog Post:
https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift#end_to_end_springboot_demo
+* link:{keytool-docs}[Keytool documentation]
+*
link:https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift#end_to_end_springboot_demo[Dynamically
creating Java keystores in OpenShift]
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/build-and-deploy-workflows.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/build-and-deploy-workflows.adoc
index 0a310b983..f79108850 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/build-and-deploy-workflows.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/build-and-deploy-workflows.adoc
@@ -16,25 +16,30 @@
:docker_doc_arg_url: https://docs.docker.com/engine/reference/builder/#arg
:quarkus_extensions_url: https://quarkus.io/extensions/
-This document describes how to build and deploy your workflow on a cluster
using the link:{kogito_serverless_operator_url}[{operator_name}] only by having
a `SonataFlow` custom resource.
+This document describes how to build and deploy your workflow on a cluster
using the link:{kogito_serverless_operator_url}[{operator_name}].
Every time you need to change the workflow definition the system will
(re)build a new immutable version of the workflow. If you're still in
development phase, please see the
xref:cloud/operator/developing-workflows.adoc[] guide.
[IMPORTANT]
====
-The build system implemented by the {operator_name} is not suitable for
complex production use cases. Consider using an external tool to build your
application such as Tekton and ArgoCD. The resulting image can then be deployed
with `SonataFlow` custom resource. See more at
xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[Setting
a custom image in the default container] section in the
xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[]
guide.
+The build system implemented by the {operator_name} is not suitable for
complex production use cases. Consider using an external tool to build your
application, such as Tekton or ArgoCD. The resulting image can then be deployed
with the `SonataFlow` custom resource. More details are available in the
xref:cloud/operator/customize-podspec.adoc#custom-image-default-container[Setting
a custom image in the default container] section of the
xref:cloud/operator/customize-podspec.adoc[] guide.
====
Follow the <<building-kubernetes, Kubernetes>> or <<building-openshift,
OpenShift>> sections of this document based on the cluster you wish to build
your workflows on.
.Prerequisites
* A Workflow definition.
-* The {operator_name} installed. See
xref:cloud/operator/install-serverless-operator.adoc[] guide
+* The {operator_name} installed. See
xref:cloud/operator/install-serverless-operator.adoc[] guide.
-[#configure-build-system]
+[[configure-workflow-build-system]]
== Configuring the build system
-The operator can build workflows on Kubernetes or OpenShift. On Kubernetes, it
uses link:{kaniko_url}[Kaniko] and on OpenShift a
link:{openshift_build_url}[standard BuildConfig]. The operator build system is
not tailored for advanced production use cases and you can do only a few
customizations.
+The operator can build workflows on Kubernetes or OpenShift. On Kubernetes, it
uses link:{kaniko_url}[Kaniko] and on OpenShift a
link:{openshift_build_url}[standard BuildConfig].
+
+[IMPORTANT]
+====
+The operator build system is not tailored for advanced production use cases
and allows only a few customizations.
+====
=== Using another Workflow base builder image
@@ -52,7 +57,7 @@ kubectl patch sonataflowplatform <name> --patch 'spec:\n
build:\n config:
[#customize-base-build]
=== Customize the base build Dockerfile
-The operator uses the sonataflow-operator-builder-config `ConfigMap` in the
operator's installation namespace ({operator_installation_namespace}) to
configure and run the workflow build process.
+The operator uses the `ConfigMap` named `sonataflow-operator-builder-config`
in the operator's installation namespace ({operator_installation_namespace}) to
configure and run the workflow build process.
You can change the `Dockerfile` entry in this `ConfigMap` to tailor the
Dockerfile to your needs. Just be aware that this can break the build process.
.Example of the sonataflow-operator-builder-config `ConfigMap`
@@ -87,6 +92,7 @@ metadata:
The excerpt above is just an example. The current version might be slightly
different. Don't use this example in your installation.
====
+[[changing-sfplatform-resource-requirements]]
=== Changing resources requirements
You can create or edit a `SonataFlowPlatform` in the workflow namespace
specifying the link:{kubernetes_resource_management_url}[resources
requirements] for the internal builder pods:
@@ -138,6 +144,7 @@ spec:
These parameters will only apply to new build instances.
+[[passing-build-arguments-to-internal-workflow-builder]]
=== Passing arguments to the internal builder
You can pass build arguments (see link:{docker_doc_arg_url}[Dockerfile ARG])
to the `SonataFlowBuild` instance.
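For illustration, a minimal `SonataFlowBuild` sketch passing a build argument could look as follows; the workflow name and the argument value are hypothetical examples:

.Passing a build argument (illustrative sketch)
[source,yaml]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowBuild
metadata:
  name: my-workflow
spec:
  buildArgs:
    # any ARG declared in the builder Dockerfile can be set here; the value below is a hypothetical example
    - name: QUARKUS_EXTENSIONS
      value: io.quarkus:quarkus-smallrye-openapi
----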
@@ -210,9 +217,10 @@ The table below lists the Dockerfile arguments available
in the default {operato
|MAVEN_ARGS_APPEND | Arguments passed to the maven build when the workflow
build is produced. | -Dkogito.persistence.type=jdbc
-Dquarkus.datasource.db-kind=postgresql
|===
+[[setting-env-variables-for-internal-workflow-builder]]
=== Setting environment variables in the internal builder
-You can set environment variables to the `SonataFlowBuild` internal builder
pod.
+You can set environment variables for the `SonataFlowBuild` internal builder
pod. This is useful when you want to influence only the build of the workflow.
[IMPORTANT]
====
@@ -275,7 +283,7 @@ Since the `envs` attribute is an array of
link:{kubernetes_envvar_url}[Kubernete
On Minikube and Kubernetes only plain values, `ConfigMap` and `Secret` are
supported due to a restriction on the build system provided by these platforms.
====
-[#building-kubernetes]
+[[building-and-deploying-on-kubernetes]]
== Building on Kubernetes
[TIP]
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/building-custom-images.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/building-custom-images.adoc
index f0b849349..61b579b28 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/building-custom-images.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/building-custom-images.adoc
@@ -9,7 +9,7 @@
// NOTE: this guide can be expanded in the future to include prod images,
hence the file name
// please change the title section and rearrange the others once it's
done
-This document describes how to build a custom development image to use in
SonataFlow.
+This document describes how to build a custom development image to use in
{product_name}.
== The development mode image structure
@@ -95,7 +95,7 @@ The container exposes port 8080 by default. When running the
container locally,
Next, we mount a local volume to the container's application path. Any local
workflow definitions, specification files, or properties should be mounted to
`src/main/resources`. Alternatively, you can also mount custom Java files to
`src/main/java`.
-Finally, to use the new generated image with the dev profile you can see:
xref:cloud/operator/developing-workflows.adoc#_using_another_workflow_base_image[Using
another Workflow base image].
+Finally, to use the newly generated image with the dev profile, follow the
procedure in the
xref:cloud/operator/developing-workflows.adoc#_using_another_workflow_base_image[Using
another Workflow base image] section.
== Additional resources
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-knative-eventing-resources.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-knative-eventing-resources.adoc
index 8a9f362c9..8ce4fa6bd 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-knative-eventing-resources.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-knative-eventing-resources.adoc
@@ -6,11 +6,17 @@
This document describes how you can configure the workflows to let the
operator create the Knative eventing resources on Kubernetes.
-{operator_name} can analyze the event definitions from the `spec.flow` and
create `SinkBinding`/`Trigger` based on the type of the event. Then the
workflow service can utilize them for event communications. The same purpose of
this feature in quarkus extension can be found
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc#ref-example-sw-event-definition-knative[here].
+{operator_name} can analyze the event definitions from the `spec.flow` and
create `SinkBinding`/`Trigger` based on the type of the event. Then the
workflow service can utilize them for event communications.
+
+[NOTE]
+====
+Alternatively, you can follow our
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc#ref-example-sw-event-definition-knative[advanced
guide] that introduces this feature using Java and Quarkus.
+====
== Prerequisite
-1. Knative is installed on the cluster and Knative Eventing is initiated with
a `KnativeEventing` CR.
-2. A broker named `default` is created. Currently, all Triggers created by the
{operator_name} will read events from `default`
+1. The {operator_name} installed. See
xref:cloud/operator/install-serverless-operator.adoc[] guide.
+2. Knative is installed on the cluster and Knative Eventing is initiated with
a `KnativeEventing` CR.
+3. A broker named `default` is created. Currently, all Triggers created by the
{operator_name} will read events from the `default` broker.
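As a sketch of the broker prerequisite, assuming a standard Knative Eventing installation, a `default` broker can be created with a manifest like:

.Creating a default broker (illustrative sketch)
[source,yaml]
----
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: <your_namespace>
----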
== Configuring the workflow
@@ -52,7 +58,7 @@ Knative resources are not watched by the operator, indicating
they will not unde
== Additional resources
* https://knative.dev/docs/eventing/[Knative Eventing official site]
-*
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[quarkus
extension for Knative eventing]
+*
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[Quarkus
extension for Knative eventing]
*
xref:job-services/core-concepts.adoc#knative-eventing-supporting-resources[Knative
eventing for Job service]
* xref:data-index/data-index-core-concepts.adoc#_knative_eventing[Knative
eventing for data index]
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-workflows.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-workflows.adoc
index 95bc97b8d..b4f57ef5f 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-workflows.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/configuring-workflows.adoc
@@ -85,10 +85,10 @@ If you try to change any of them, the operator will
override them with the defau
== Additional resources
-* https://quarkus.io/guides/config-reference#profile-aware-files[Quarkus -
Profile aware files]
+* link:https://quarkus.io/guides/config-reference#profile-aware-files[Quarkus
Configuration Reference Guide - Profile aware files]
* xref:core/configuration-properties.adoc[]
-* xref:cloud/operator/known-issues.adoc[]
* xref:cloud/operator/developing-workflows.adoc[]
* xref:cloud/operator/build-and-deploy-workflows.adoc[]
+* xref:cloud/operator/known-issues.adoc[]
include::../../../pages/_common-content/report-issue.adoc[]
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/customize-podspec.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/customize-podspec.adoc
index 3fd4dd3eb..9ae728467 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/customize-podspec.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/customize-podspec.adoc
@@ -209,7 +209,7 @@ In this scenario, the `.spec.resources` attribute is
ignored since it's only use
xref:cloud/operator/known-issues.adoc[In the roadmap] you will find that we
plan to consider the `.spec.resources` attribute when the image is specified in
the default container.
====
-It's advised that the SonataFlow `.spec.flow` definition and the workflow
built within the image corresponds to the same workflow. If these definitions
don't match you may experience poorly management and configuration. The
{operator_name} uses the `.spec.flow` attribute to configure the application,
service discovery, and service binding with other deployments within the
topology.
+It's advised that the SonataFlow `.spec.flow` definition and the workflow
built within the image correspond to the same workflow. If these definitions
don't match, you may experience poor management and configuration. The
{operator_name} uses the `.spec.flow` attribute to configure the application,
service discovery, and service binding with other deployments within the
topology.
[IMPORTANT]
====
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/developing-workflows.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/developing-workflows.adoc
index e8787c7a0..f232d49c0 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/developing-workflows.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/developing-workflows.adoc
@@ -16,6 +16,11 @@ Workflows in the development profile are not tailored for
production environment
{operator_name} is under active development with features yet to be
implemented. Please see xref:cloud/operator/known-issues.adoc[].
====
+.Prerequisites
+* You have set up your environment according to the
xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal
environment setup] guide.
+* You have the cluster instance up and running. See
xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting
the cluster for local development] guide.
+
+[[proc-introduction-to-development-profile]]
== Introduction to the Development Profile
The development profile is the easiest way to start playing around with
Workflows and the operator.
@@ -74,13 +79,13 @@ spec:
<2> In the `flow` attribute goes the Workflow definition as described by the
xref:core/cncf-serverless-workflow-specification-support.adoc[CNCF Serverless
Workflow specification]. So if you already have a workflow definition, you can
use it there. Alternatively, you can use the
xref:tooling/serverless-workflow-editor/swf-editor-overview.adoc[editors to
create your workflow definition].
+[[proc-deploying-new-workflow]]
== Deploying a New Workflow
.Prerequisites
-* You have xref:cloud/operator/install-serverless-operator.adoc[installed the
{operator_name}]
-* You have created a new {product_name} Kubernetes YAML file
+* You have a new {product_name} Kubernetes Workflow definition in a YAML file.
You can use the Greeting example in the
<<proc-introduction-to-development-profile,introduction to development
profile>> section.
-Having a new Kubernetes Workflow definition in a YAML file (you can use the
above example), you can deploy it in your cluster with the following command:
+Having a Kubernetes Workflow definition in a YAML file, you can deploy it in
your cluster with the following command:
.Deploying a new SonataFlow Custom Resource in Kubernetes
[source,bash,subs="attributes+"]
@@ -134,7 +139,7 @@ and changing the Workflow definition inside the Custom
Resource Spec section.
Alternatively, you can save the Custom Resource definition file and edit it
with your desired editor and re-apply it.
-For example using VS Code, there are the commands needed:
+For example, using VS Code, these are the commands needed:
[source,bash,subs="attributes+"]
----
@@ -146,22 +151,58 @@ kubectl apply -f workflow_devmode.yaml -n <your_namespace>
The operator ensures that the latest Workflow definition is running and ready.
This way, you can include the Workflow in your development scenario and start
making requests to it.
+[[proc-check-if-workflow-is-running]]
== Check if the Workflow is running
+.Prerequisites
+* You have deployed a workflow to your cluster following the example in the
<<proc-deploying-new-workflow,deploying a new workflow>> section.
+
In order to check that the {product_name} Greeting workflow is up and running,
you can try to perform a test HTTP call. First, you must get the service URL:
-.Exposing the Workflow
-[source,bash,subs="attributes+"]
+.Exposing the workflow
+[tabs]
+====
+Minikube::
++
+--
+.Expose the workflow on Minikube
+[source,shell]
----
+# Input
minikube service greeting -n <your_namespace> --url
+
+# Example output, use the URL as a base to access the current workflow
http://127.0.0.1:57053
-# use the above output to get the current Workflow URL in your environment
+# Your workflow is accessible at http://127.0.0.1:57053/greeting
----
+--
+Kind::
++
+--
+.Expose the workflow on Kind
+[source,shell]
+----
+# Find the service of your workflow
+kubectl get service -n <namespace>
+
+# Example output
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+greeting ClusterIP 10.96.0.1 <none> 80/TCP 21h
+
+# Now forward the port and keep the terminal window open
+kubectl port-forward service/greeting 31852:80 -n <namespace>
+
+# Your workflow is accessible at localhost:31852/greeting
+----
+--
+====
[TIP]
====
-When running on Minikube, the service is already exposed for you via
`NodePort`. On OpenShift, link:{openshift_route_url}[a Route is automatically
created in devmode]. If you're running on Kubernetes you can
link:{kubernetes_url}[expose your service using an Ingress].
+* When running on Minikube, the service is already exposed for you via
`NodePort`.
+* On OpenShift, link:{openshift_route_url}[a Route is automatically created in
devmode].
+* If you're running on Kubernetes you can link:{kubernetes_url}[expose your
service using an Ingress].
====
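For the Kubernetes case, a minimal Ingress sketch might look like the following; it assumes the workflow service is named `greeting`, exposes port 80, and that an ingress controller is installed in the cluster:

.Exposing the workflow with an Ingress (illustrative sketch)
[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: greeting
spec:
  rules:
    - http:
        paths:
          - path: /greeting
            pathType: Prefix
            backend:
              service:
                name: greeting   # assumed workflow service name
                port:
                  number: 80     # assumed service port
----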
You can now point your browser to the Swagger UI and start making requests
with the REST interface.
@@ -259,7 +300,7 @@ It can give you a clue about what might be happening. See
xref:cloud/operator/wo
.Watch the workflow logs
[source,shell,subs="attributes+"]
----
-kubectl logs deployment/<workflow-name> -f
+kubectl logs deployment/<workflow-name> -f -n <your_namespace>
----
+
If you decide to open an issue or ask for help in {product_name} communication
channels, this logging information is always useful for the person who will try
to help you.
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/install-serverless-operator.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/install-serverless-operator.adoc
index b36115ff0..86c8d1f81 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/install-serverless-operator.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/install-serverless-operator.adoc
@@ -14,11 +14,13 @@
This guide describes how to install the {operator_name} in a Kubernetes or
OpenShift cluster. The operator is in an
xref:cloud/operator/known-issues.adoc[early development stage] (community only)
and has been tested on OpenShift {openshift_version_min}+, Kubernetes
{kubernetes_version}+, and link:{minikube_url}[Minikube].
.Prerequisites
-* A Kubernetes or OpenShift cluster with admin privileges. Alternatively, you
can use Minikube or KIND.
-* `kubectl` command-line tool is installed. Otherwise, Minikube provides it.
+* A Kubernetes or OpenShift cluster with admin privileges and `kubectl`
installed.
+* Alternatively, you can use Minikube or KIND in your local environment. See
xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal
environment setup] and
xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting
the cluster for local development] guides.
== {product_name} Operator OpenShift installation
+=== Install
+
To install the operator on OpenShift refer to the
"link:{openshift_operator_install_url}[Adding Operators to a cluster]" from the
OpenShift's documentation.
When searching for the operator in the *Filter by keyword* field, use the word
`{operator_openshift_keyword}`. If you're installing from the CLI, the
operator's catalog name is `{operator_openshift_catalog}`.
@@ -29,6 +31,8 @@ To remove the operator on OpenShift refer to the
"link:{openshift_operator_unins
== {product_name} Operator Kubernetes installation
+=== Install
+
To install the operator on Kubernetes refer to the
"link:{kubernetes_operator_install_url}[How to install an Operator from
OperatorHub.io]" from the OperatorHub's documentation.
When link:{operatorhub_url}[searching for the operator in the *Search
OperatorHub* field], use the word `{operator_k8s_keyword}`.
@@ -46,37 +50,11 @@ When searching for the subscription to remove, use the word
`{operator_k8s_subsc
If you're running on Kubernetes or OpenShift, it is highly recommended to
install the operator from the OperatorHub or OpenShift Console instead since
the installation is managed by OLM. Use this method only if you need a snapshot
version or you're running locally on Minikube or KIND.
====
-=== Prepare a Minikube instance
-
-[NOTE]
-====
-You can safely skip this section if you're not using Minikube.
-====
-
.Prerequisites
-* A machine with at least 8GB memory and a
link:https://en.wikipedia.org/wiki/Multi-core_processor[CPU with 8 cores]
-* Docker or Podman installed
-
-Run the following command to create a new instance capable of installing the
operator and deploy workflows:
-
-[source,shell,subs="attributes+"]
-----
-minikube start --cpus 4 --memory 4096 --addons registry --addons
metrics-server --insecure-registry "10.0.0.0/24" --insecure-registry
"localhost:5000"
-----
-
-[NOTE]
-====
-To speed up the build time, you can increase the CPUs and memory options so
that your Minikube instance will have more resources. For example, use `--cpus
12 --memory 16384`. If you have already created your Minikube instance, you
will need to recreate it for these changes to apply.
-====
-
-If Minikube does not work with the default driver, also known as `docker`, you
can try to start with the `podman` driver as follows:
-
-.Start Minikube with the Podman driver
-[source,shell,subs="attributes+"]
-----
-minikube start [...] --driver podman
-----
+* You have set up your environment according to the
xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal
environment setup] guide.
+* You have the cluster instance up and running. See
xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting
the cluster for local development] guide.
+[[proc-install-serverless-operator-snapshot]]
=== Install
To install the {product_name} Operator, you can use the following command:
@@ -86,11 +64,11 @@ To install the {product_name} Operator, you can use the
following command:
----
kubectl create -f
https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/{operator_version}/operator.yaml
----
-You can also specify a version:
+Replace `main` with a specific version if needed:
----
-kubectl create -f
https://raw.githubusercontent.com/kiegroup/kogito-serverless-operator/v<version>/operator.yaml
+kubectl create -f
https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/<version>/operator.yaml
----
-`<version>` could be `1.43.0` for instance.
+`<version>` could be `1.44.1` for instance.
You can follow the deployment of the {product_name} Operator:
@@ -144,7 +122,7 @@ To uninstall the correct version of the operator, first you
must get the current
----
kubectl get deployment sonataflow-operator-controller-manager -n
sonataflow-operator-system -o
jsonpath="{.spec.template.spec.containers[?(@.name=='manager')].image}"
-quay.io/kiegroup/kogito-serverless-operator-nightly:1.41.0
+quay.io/kiegroup/kogito-serverless-operator-nightly:latest
----
.Uninstalling the operator
@@ -155,9 +133,7 @@ kubectl delete -f
https://raw.githubusercontent.com/apache/incubator-kie-kogito-
[TIP]
====
-If you're running a snapshot version, use this URL instead
`https://raw.githubusercontent.com/apache/incubator-kie-kogito-serverless-operator/main/operator.yaml`.
-
-The URL should be the same used when installing the operator.
+The URL should be the same as the one you used when installing the operator.
====
== Additional resources
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
index bdfcb4834..be56488cf 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/referencing-resource-files.adoc
@@ -14,15 +14,17 @@ For example, when doing
xref:service-orchestration/orchestration-of-openapi-base
If these files are not in a remote location that can be accessed via the HTTP
protocol, you must describe in the `SonataFlow` CR where to find them within
the cluster. This is done via link:{kubernetes_configmap_url}[`ConfigMaps`].
-== Creating ConfigMaps with Workflow Additional Files
+== Creating ConfigMaps with the additional files referenced by the Workflow
.Prerequisites
-* You have the files available in your file system
-* You have permissions to create `ConfigMaps` in the target namespace
+* You have set up your environment according to the
xref:getting-started/preparing-environment.adoc#proc-minimal-local-environment-setup[minimal
environment setup] guide.
+* You have the cluster instance up and running. See
xref:getting-started/preparing-environment.adoc#proc-starting-cluster-fo-local-development[starting
the cluster for local development] guide.
+* You have permissions to create `ConfigMaps` in the target namespace of your
cluster.
+* (Optional) You have the files that you want to reference in your workflow
definition ready.
-Given that you already have the file you want to add to your workflow
definition, you link:{kubernetes_create_configmap_url}[can create a
`ConfigMap`] as you normally would with the contents of the file.
+If you already have the files referenced in your workflow definition, you
link:{kubernetes_create_configmap_url}[can create a `ConfigMap`] in your target
namespace with the contents of each file.
-For example, given the following workflow:
+In the example below, you use the contents of the
`specs/workflow-service-schema.json` and `specs/workflow-service-openapi.json`
files to create the `ConfigMap`:
.Example of a workflow referencing additional files
[source,yaml,subs="attributes+"]
@@ -56,11 +58,11 @@ spec:
<1> The workflow defines an input schema
<2> The workflow requires an OpenAPI specification file to make a REST
invocation
-For this example, you have two options. You can either create two `ConfigMaps`
to have a clear separation of concerns or only one with both files.
+For the `Hello Service` workflow in the example, you have two options. You can either create two `ConfigMaps`, one for each file, to have a clear separation of concerns, or group both files into one.
From the operator perspective, it won't make any difference since both files
will be available for the workflow application at runtime.
-To make it simple, you can create only one `ConfigMap`. Given that the files
are available in the current directory:
+To keep it simple, you can create only one `ConfigMap`. Navigate to the directory where your resource files are located and create the `ConfigMap` using the following command:
.Creating a ConfigMap from the current directory
[source,bash,subs="attributes+"]
@@ -84,10 +86,12 @@ metadata:
name: service-files
data:
workflow-service-schema.json: # data was removed to save space
+ # <CONTENT OF THE FILE>
workflow-service-openapi.json: # data was removed to save space
+ # <CONTENT OF THE FILE>
----
-Now you can reference this `ConfigMap` to your `SonataFlow` CR:
+Now you can add a reference to this `ConfigMap` in your `SonataFlow` CR:
.SonataFlow CR referencing a ConfigMap resource
[source,yaml,subs="attributes+"]
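The referenced `ConfigMap` is listed under `spec.resources.configMaps` in the CR. A minimal sketch, assuming the `workflowPath` mount directory matches the `specs/` folder used in this example:

.Sketch of a SonataFlow CR referencing the ConfigMap created above
[source,yaml,subs="attributes+"]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlow
metadata:
  name: service
spec:
  resources:
    configMaps:
      - configMap:
          name: service-files   # the ConfigMap created above
        workflowPath: specs     # files are mounted under this relative path
  flow:
    # ... workflow definition ...
----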
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
index 1aacc7bb6..a4aef68d2 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/supporting-services.adoc
@@ -6,7 +6,11 @@
// links
:kogito_serverless_operator_url:
https://github.com/apache/incubator-kie-kogito-serverless-operator/
-By default, workflows use an embedded version of
xref:data-index/data-index-core-concepts.adoc[Data Index]. This document
describes how to deploy supporting services, like Data Index, on a cluster
using the link:{kogito_serverless_operator_url}[{operator_name}].
+Under the hood, {operator_name} supports several services that enhance its capabilities, for example xref:data-index/data-index-core-concepts.adoc[Data Index] and xref:job-services/core-concepts.adoc[Job Service].
+See these guides to learn more about them.
+
+By default, workflows deployed by the operator use an embedded version of xref:data-index/data-index-core-concepts.adoc[Data Index]; however, non-embedded options are supported as well. This document describes how to deploy and configure supporting services, like Data Index or Job Service, on a cluster using the link:{kogito_serverless_operator_url}[{operator_name}].
[IMPORTANT]
====
@@ -15,14 +19,15 @@ By default, workflows use an embedded version of
xref:data-index/data-index-core
.Prerequisites
* The {operator_name} installed. See
xref:cloud/operator/install-serverless-operator.adoc[] guide
-* A postgresql database, if persistence is required
+* A PostgreSQL database. Required if you plan to use the non-embedded, persistent versions of the supporting services. We recommend creating a PostgreSQL deployment in your cluster. Make a note of your credentials.
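If you do not already have a PostgreSQL instance, a minimal deployment for testing might look like the sketch below; the `postgres:15` image, credentials, and service name are placeholder assumptions, not part of this guide:

.Sketch of a minimal PostgreSQL Deployment and Service (placeholder values)
[source,yaml,subs="attributes+"]
----
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels: {app: postgres}
  template:
    metadata:
      labels: {app: postgres}
    spec:
      containers:
        - name: postgres
          image: postgres:15   # placeholder image tag
          env:
            - name: POSTGRES_USER
              value: sonataflow   # note these credentials for the secret below
            - name: POSTGRES_PASSWORD
              value: sonataflow
            - name: POSTGRES_DB
              value: sonataflow
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql   # referenced later as <postgresql-service>
spec:
  selector: {app: postgres}
  ports:
    - port: 5432
----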
[#deploy-supporting-services]
== Deploy supporting services
+[#deploy-data-index-service]
=== Data Index
-You can deploy Data Index via `SonataFlowPlatform` configuration. The operator
will then configure all new workflows, with the "prod" profile, to use that
Data Index.
+You can deploy Data Index via `SonataFlowPlatform` configuration. The operator
will then configure all new workflows, with the "preview" or "gitops" profile,
to use that Data Index.
Following is a basic configuration. It will deploy an ephemeral Data Index to
the same namespace as the `SonataFlowPlatform`.
@@ -38,14 +43,13 @@ spec:
dataIndex: {}
----
-If you require Data Index persistence, this can be done with a `postgresql`
database.
-
-Following is a services configuration with the persistence option enabled.
You'll first need to create a secret with your database credentials.
+If your use case requires persistence, Data Index supports a `postgresql` database.
+First, you need to create a secret with the credentials to access your PostgreSQL deployment.
.Create a Secret for datasource authentication.
[source,bash,subs="attributes+"]
----
-kubectl create secret generic <creds-secret>
--from-literal=POSTGRESQL_USER=<user>
--from-literal=POSTGRESQL_PASSWORD=<password> -n workflows
+kubectl create secret generic <creds-secret>
--from-literal=POSTGRESQL_USER=<user>
--from-literal=POSTGRESQL_PASSWORD=<password>
--from-literal=POSTGRESQL_DATABASE=<db_name> -n workflows
----
.Example of a SonataFlowPlatform instance with a Data Index deployment
persisted to a postgresql database
@@ -92,18 +96,97 @@ spec:
image: <image:tag> <5>
----
-<1> Determines whether "prod" profile workflows should be configured to use
this service, defaults to `true`
+<1> Determines whether "preview" or "gitops" profile workflows should be
configured to use this service, defaults to `true`
<2> Secret key of your postgresql credentials user, defaults to
`POSTGRESQL_USER`
<3> PostgreSql JDBC URL
<4> Number of Data Index pods, defaults to `1`
-<5> Custom Data Index container image name
+<5> Custom Data Index container image name, if customization is required
+
+
+[#deploy-job-service]
+=== Job Service
+
+You can deploy Job Service via `SonataFlowPlatform` configuration. The operator will then configure all new workflows, with the "preview" or "gitops" profile, to use that Job Service.
+
+Following is a basic configuration. It will deploy an ephemeral Job Service to
the same namespace as the `SonataFlowPlatform`.
+
+.Example of a SonataFlowPlatform instance with an ephemeral Job Service
deployment
+[source,yaml,subs="attributes+"]
+----
+apiVersion: sonataflow.org/v1alpha08
+kind: SonataFlowPlatform
+metadata:
+ name: sonataflow-platform
+spec:
+ services:
+ jobService: {}
+----
+
+If your use case requires persistence, Job Service supports a `postgresql` database.
+First, you need to create a secret with the credentials to access your PostgreSQL deployment.
+
+.Create a Secret for datasource authentication.
+[source,bash,subs="attributes+"]
+----
+kubectl create secret generic <creds-secret>
--from-literal=POSTGRESQL_USER=<user>
--from-literal=POSTGRESQL_PASSWORD=<password>
--from-literal=POSTGRESQL_DATABASE=<db_name> -n workflows
+----
+
+.Example of a SonataFlowPlatform instance with a Job Service deployment
persisted to a postgresql database
+[source,yaml,subs="attributes+"]
+----
+apiVersion: sonataflow.org/v1alpha08
+kind: SonataFlowPlatform
+metadata:
+ name: sonataflow-platform
+spec:
+ services:
+ jobService:
+ persistence:
+ postgresql:
+ secretRef:
+ name: <creds-secret> <1>
+ serviceRef:
+ name: <postgresql-service> <2>
+----
+
+<1> Name of your postgresql credentials secret
+<2> Name of your postgresql k8s service
+
+.Example of a SonataFlowPlatform instance with a persisted Job Service
deployment and custom pod configuration
+[source,yaml,subs="attributes+"]
+----
+apiVersion: sonataflow.org/v1alpha08
+kind: SonataFlowPlatform
+metadata:
+ name: sonataflow-platform
+spec:
+ services:
+ jobService:
+ enabled: false <1>
+ persistence:
+ postgresql:
+ secretRef:
+ name: <creds-secret>
+ userKey: <secret-user-key> <2>
+      jdbcUrl: "jdbc:postgresql://host:port/database?currentSchema=job-service" <3>
+ podTemplate:
+ replicas: 1 <4>
+ container:
+ image: <image:tag> <5>
+----
+
+<1> Determines whether "preview" or "gitops" profile workflows should be
configured to use this service, defaults to `true`
+<2> Secret key of your postgresql credentials user, defaults to
`POSTGRESQL_USER`
+<3> PostgreSQL JDBC URL
+<4> Number of Job Service pods, defaults to `1`
+<5> Custom Job Service container image name, if customization is required
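Both supporting services can be enabled in the same `SonataFlowPlatform`; combining the ephemeral examples above gives a sketch like:

.Sketch of a SonataFlowPlatform with both supporting services enabled
[source,yaml,subs="attributes+"]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
spec:
  services:
    dataIndex: {}    # ephemeral Data Index
    jobService: {}   # ephemeral Job Service
----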
[#cluster-wide-services]
== Cluster-Wide Supporting Services
The `SonataFlowClusterPlatform` CR is optionally used to specify a
cluster-wide set of supporting services for workflow consumption. This is done
by referencing an existing, namespaced `SonataFlowPlatform` resource.
-Following is a basic configuration. It will allow workflows cluster-wide to
leverage whatever supporting services are configured in the chosen "central"
namespace.
+Following is a basic configuration that allows workflows, deployed in any
namespace, to leverage supporting services configured in the chosen "central"
namespace.
.Example of a basic SonataFlowClusterPlatform CR
[source,yaml,subs="attributes+"]
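The example boils down to a cluster-scoped object pointing at the namespaced platform. A sketch, assuming the `platformRef` field of the `SonataFlowClusterPlatform` CRD:

.Sketch of a SonataFlowClusterPlatform referencing a central platform
[source,yaml,subs="attributes+"]
----
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowClusterPlatform
metadata:
  name: cluster-platform
spec:
  platformRef:
    name: sonataflow-platform      # the "central" SonataFlowPlatform
    namespace: <central-namespace> # namespace where it is deployed
----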
diff --git
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
index 36f90a779..f3fc0f34a 100644
---
a/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/cloud/operator/workflow-status-conditions.adoc
@@ -8,7 +8,7 @@ This document describes the Status and Conditions of a
`SonataFlow` object manag
link:https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#typical-status-properties[Kubernetes
Status] is an important property to observe in order to understand what is
currently happening with the object. It can also help you troubleshoot or
integrate with other objects in the cluster.
-You can inspect the Status of any Workflow object using the following command:
+You can inspect the `Status` of any workflow object using the following
command:
.Checking the Workflow Status
[source,bash,subs="attributes+"]
@@ -18,7 +18,7 @@ kubectl get workflow <your_workflow_name> -n <your_namespace>
-o jsonpath={.stat
== General Status
-The table below lists the general structure of a Workflow status:
+The table below lists the general structure of a workflow status:
.Description of SonataFlow Status object
[cols="1,2"]
@@ -43,11 +43,11 @@ The `Conditions` property might vary depending on the
Workflow profile. The next
== Development Profile Conditions
-When you deploy a Workflow with the
xref:cloud/operator/developing-workflows.adoc[development profile], the
operator deploys a ready-to-use container with a running Workflow instance.
+When you deploy a workflow with the
xref:cloud/operator/developing-workflows.adoc[development profile], the
operator deploys a ready-to-use container with a running workflow instance.
-The following table lists the possible Conditions.
+The following table lists the possible `Conditions`.
-.Conditions Scenarios in Development
+.Conditions Scenarios in Development mode
[cols="0,0,1,2"]
|===
|Condition | Status | Reason | Description
@@ -84,13 +84,13 @@ The following table lists the possible Conditions.
|===
-In normal conditions, the Workflow will transition from `Running`,
`WaitingForDeployment` condition to `Running`. In case something wrong happens,
consult the section
xref:cloud/operator/developing-workflows.adoc#troubleshooting[Workflow
Troubleshooting in Development].
+Under normal conditions, the workflow will transition from `Running` to `WaitingForDeployment` and back to `Running`. If something goes wrong, consult the section xref:cloud/operator/developing-workflows.adoc#troubleshooting[Workflow Troubleshooting in development mode].
-== Production Profile Conditions
+== Preview Profile Conditions
-Deploying the Workflow in
xref:cloud/operator/build-and-deploy-workflows.adoc[Production profile] makes
the operator build an immutable image for the Workflow application. The
progress of the immutable image build can be followed by observing the Workflow
Conditions.
+Deploying the Workflow in
xref:cloud/operator/build-and-deploy-workflows.adoc[preview profile] makes the
operator build an immutable image for the Workflow application. The progress of
the immutable image build can be followed by observing the Workflow Conditions.
-.Condition Scenarios in Production
+.Condition Scenarios in Preview mode
[cols="0,0,1,2"]
|===
|Condition | Status | Reason | Description
diff --git
a/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
b/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
index 116498fa3..2f685cd3e 100644
---
a/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/getting-started/preparing-environment.adoc
@@ -3,6 +3,9 @@
This guide lists the different ways to set up your environment for
{product_name} development.
If you are new, start with the minimal one.
+.Prerequisites
+* A machine with at least 8GB of memory and a link:https://en.wikipedia.org/wiki/Multi-core_processor[CPU with 8 cores]
+
[[proc-minimal-local-environment-setup]]
== Minimal local environment setup
@@ -14,7 +17,7 @@ start the development on your local machine using our guides.
. Install link:{minikube_start_url}[minikube] or link:{kind_install_url}[kind].
. Install link:{kubectl_install_url}[Kubernetes CLI].
. Install link:{knative_quickstart_url}[Knative using quickstart]. This will
also set up Knative Serving and Eventing for you and the cluster should be
running.
-. xref:cloud/operator/install-serverless-operator.adoc[]
+. Install the
xref:cloud/operator/install-serverless-operator.adoc#_sonataflow_operator_manual_installation[{operator_name}
manually].
. Install
xref:testing-and-troubleshooting/kn-plugin-workflow-overview.adoc[Knative
Workflow CLI].
. Install link:{visual_studio_code_url}[Visual Studio Code] with
link:{visual_studio_code_swf_extension_url}[our extension] that simplifies
development of workflows by providing visual aids and auto-complete features.
diff --git
a/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc
b/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc
index d47ce363f..c3b0a6f97 100644
---
a/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc
@@ -292,7 +292,7 @@ __ ____ __ _____ ___ __ ____ ______
2022-05-25 14:38:13,375 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,105 INFO
[org.kie.kog.per.pro.ProtobufService] (main) Registering Kogito ProtoBuffer
file: kogito-index.proto
2022-05-25 14:38:13,377 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,132 INFO
[org.kie.kog.per.pro.ProtobufService] (main) Registering Kogito ProtoBuffer
file: kogito-types.proto
2022-05-25 14:38:13,378 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,181 INFO
[io.quarkus] (main) data-index-service-inmemory 1.22.0.Final on JVM (powered by
Quarkus 2.9.0.Final) started in 4.691s. Listening on: http://0.0.0.0:8080
-2022-05-25 14:38:13,379 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO
[io.quarkus] (main) Profile prod activated.
+2022-05-25 14:38:13,379 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO
[io.quarkus] (main) Profile preview activated.
2022-05-25 14:38:13,380 INFO
[org.kie.kog.qua.pro.dev.DataIndexInMemoryContainer]
(docker-java-stream--938264210) STDOUT: 2022-05-25 17:38:13,182 INFO
[io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm,
hibernate-orm-panache, inmemory-postgres, jdbc-postgresql, narayana-jta, oidc,
reactive-routes, rest-client-reactive, rest-client-reactive-jackson, security,
smallrye-context-propagation, smallrye-graphql-client, smallrye-health,
smallrye-metrics, smallrye-reactive-mess [...]
----
diff --git
a/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc
b/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc
index 3c76b43c7..82ddd6833 100644
---
a/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc
+++
b/serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/service-orchestration/configuring-openapi-services-endpoints-with-quarkus.adoc
@@ -89,7 +89,7 @@ To set properties for different profiles, each property needs
to be prefixed wit
* `dev`: Activates in development mode, such as `quarkus:dev`
* `test`: Activates when tests are running
-* `prod` (default profile): Activates when not running in development or test
mode
+* `preview` (default profile): Activates when not running in development or
test mode
You can also create additional profiles and activate them using the
`quarkus.profile` configuration property. For more information about Quarkus
profiles, see link:{quarkus_guides_profiles_url}[Profiles] in the Quarkus
Configuration reference guide.
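For instance, a profile-prefixed property can override an OpenAPI service endpoint per profile; the `service_yaml` config key and the URLs below are hypothetical placeholders:

.Sketch of profile-specific endpoint configuration
[source,properties]
----
# Used only in development mode (quarkus:dev)
%dev.quarkus.rest-client.service_yaml.url=http://localhost:8181
# Used when no profile prefix applies (default profile)
quarkus.rest-client.service_yaml.url=http://service.example.com
----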