domhanak commented on code in PR #604:
URL: https://github.com/apache/incubator-kie-kogito-docs/pull/604#discussion_r1539140150
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+= Creating a Quarkus Workflow Project
+
+As a developer, you can use {product_name} to create an application and in
this guide we want to explore different options and provide an overview of
available tools that can help in that purpose.
+
+We will also use Quarkus dev mode for iterative development and testing.
+
+As a common application development, we have different phases: Analysis,
Development and Deployment. Let's explore in detail each phase and what
{product_name} provides in each case:
+
+* <<proc-analysis-phase,Analysis and taking decisions phase>>
+** <<proc-adding-persistence,Adding persistence?>>
+** <<proc-adding-eventing,Adding eventing?>>
+** <<proc-adding-data-index-service,Adding Data Index service?>>
+** <<proc-adding-job-service,Adding Job service?>>
+
+* <<proc-development-phase,Development phase>>
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a
workflow, Running your workflow application and Testing your workflow
application >>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui, Refine your workflow testing with Dev-UI>>
+* <<proc-deployment-phase,Deployment phase>>
+
+
+.Prerequisites
+* You have setup your environment according to the
xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced
environment setup] guide.
+
+For more information about the tooling and the required dependencies, see
xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar
with {product_name} tooling].
+
+ifeval::["{kogito_version_redhat}" != ""]
+include::../../pages/_common-content/downstream-project-setup-instructions.adoc[]
+endif::[]
+
+
+[[proc-analysis-phase]]
+== Analysis phase
+
+Start by analyzing the requirements for your {product_name} application. This
will enable you to make decisions about persistence, eventing, security,
topology and component interaction needs of your application.
+
+[[proc-adding-persistence]]
+=== Adding persistence?
+Service orchestration is a relevant use case regarding the rise of
microservices and event driven architectures. These architectures focus on
communication between services and there is always the need to coordinate that
communication without the persistence addition requirement.
+
+{product_name} applications use an in-memory persistence by default. This
makes all the {workflow_instance} information volatile upon runtime restarts.
In case of this guide, when the workflow runtime is restarted.
+As a developer, you need to decide if there is a need to ensure that your
workflow instances remain consistent in the context.
+
+If your application requires persistence, you need to decide what kind of
persistence is needed and configure it properly.
+Follow the {product_name}
xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[persistence
guide] for more information.
+
+You can find more information about how to create an application which writes
to and reads from a database following
link:https://quarkus.io/guides/getting-started-dev-services[Your second Quarkus
application] guide.
+
+[[proc-adding-eventing]]
+=== Adding eventing?
+
+Quarkus unifies reactive and imperative programming you can find more
information about this in the
link:https://quarkus.io/guides/quarkus-reactive-architecture[Quarkus Reactive
Architecture] guide.
+
+In this phase we need to decide how the Event-Driven Architecture needs to be
added to our project.
+As an event-driven architecture, it uses events to trigger and communicate
between services. It allows decoupled applications to asynchronously publish
and subscribe to events through an event broker. The event-driven architecture
is a method of developing systems that allows information to flow in real time
between applications, microservices, and connected devices.
+
+This means that applications and devices do not need to know where they are
sending information or where the information they are consuming comes from.
+
+If we choose to add eventing, {product_name} supports different options like:
+
+* *Kafka Connector* for Reactive Messaging. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[]
for more details.
+* *Knative* eventing. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]
for more details.
+
+We need to choose how the different project components will communicate and
what kind of communication is needed. More details about
link:https://quarkus.io/guides/quarkus-reactive-architecture#quarkus-extensions-enabling-reactive[Quarkus
Extensions enabling Reactive]
+
+[[proc-adding-data-index-service]]
+=== Adding Data Index service?
+
+The decision is if we need the {data_index_ref} service enabled to be able to
index the {workflow_instance} information in order to be consumed from
{product_name} tooling or just through a GraphQl endpoint.
+
+The {data_index_ref} service is available for this purpose. See
xref:data-index/data-index-core-concepts.adoc[] for more details.
+
+Once we decide we want to index the data, we need to select how to integrate
the service in our topology. Having different options like:
+
+* We can choose to have the data indexation service integrated directly in our
application using the different
xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
Review Comment:
```suggestion
* You can choose to have the data indexation service integrated directly in
your application using the different
xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
```
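On the persistence decision discussed in the quoted section: if persistence is added, the wiring in a Quarkus workflow project is typically just configuration. A minimal sketch of what a JDBC-backed setup might look like in `application.properties` (property names follow common Quarkus/Kogito conventions; the datasource values are illustrative and may differ per version):

```properties
# Assumed example: JDBC persistence backed by a local PostgreSQL instance
kogito.persistence.type=jdbc
quarkus.datasource.db-kind=postgresql
quarkus.datasource.username=workflow
quarkus.datasource.password=workflow
quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/workflows
```

With a configuration along these lines, workflow instance state survives runtime restarts instead of living only in memory.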
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+[[proc-adding-job-service]]
+=== Adding Job service?
+
+The Job Service facilitates the scheduled execution of tasks in a cloud
environment. If any of our {product_name} workflow needs some kind of temporary
schedule, we will need to integrate the Job service.
+
+Once we decide we want a Job Service, we need to select how to integrate the
service in our topology. Having different options like:
Review Comment:
```suggestion
If you decide to use the Job Service, you need to select how to integrate the
service in your topology. Here are some options:
```
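The need for the Job Service usually surfaces as soon as a workflow uses time-based constructs. As a hypothetical illustration (workflow id and state names invented here), a CNCF Serverless Workflow `sleep` state like the one below requires a timer to be scheduled and fired later, which is the kind of work the Job Service takes on:

```json
{
  "id": "expirationcheck",
  "version": "1.0",
  "specVersion": "0.8",
  "start": "WaitForExpiry",
  "states": [
    {
      "name": "WaitForExpiry",
      "type": "sleep",
      "duration": "PT30M",
      "transition": "NotifyExpired"
    },
    {
      "name": "NotifyExpired",
      "type": "operation",
      "actions": [],
      "end": true
    }
  ]
}
```

If no workflow in the project uses sleeps, timeouts, or similar schedules, the Job Service can usually be skipped.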
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+Once we decide we want to index the data, we need to select how to integrate
the service in our topology. Having different options like:
+
+* We can choose to have the data indexation service integrated directly in our
application using the different
xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
+This allows just using the same Datasource as the application persistence
without any extra service deployment.
+** *{data_index_ref} persistence extension*. That persists the indexed data
directly at the application Data source.
+** *{data_index_ref} extension*. Allow to Persist directly the indexed data at
the application Data source and also provide the GraphQL endpoint to interact
with the persisted data.
+* Another option is to have the Data Index as a standalone service, we need to
properly configure the communication between our new application and the
service. More details in xref:data-index/data-index-service.adoc[]
Review Comment:
```suggestion
This allows you to use the same data source as the application persistence
uses, without the need for an extra service deployment.
** *{data_index_ref} persistence extension*. That persists the indexed data
directly at the application data source.
** *{data_index_ref} extension*. That persists the indexed data directly at
the application data source and also provides the GraphQL endpoint to interact
with the persisted data.
* Another option is to have the Data Index as a standalone service. In this
case, you need to properly configure the communication between your
{product_name} application and the {data_index_ref} service. More details in
xref:data-index/data-index-service.adoc[]
```
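Whichever integration option is chosen, indexed instances end up consumable through the GraphQL endpoint mentioned in the quoted section. A sketch of the kind of query involved (field and filter names follow the Kogito Data Index schema; treat this as illustrative rather than a guaranteed contract):

```graphql
{
  ProcessInstances(where: { state: { equal: ACTIVE } }) {
    id
    processId
    state
    start
  }
}
```

This is also the query surface the {product_name} tooling itself relies on when it lists running workflow instances.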
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+* We can choose to have the Job service integrated directly in our
{product_name} Quarkus application using
xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[].
+* Explore how to integrate the Job service and define the interaction with our
{product_name} application workflows. You can find more Job service related
details in xref:job-services/core-concepts.adoc[Job Service Core concepts]
+
+[[proc-development-phase]]
+== Development phase
+
+After taking some decisions about the components we need to integrate in our
project, we can jump into the workflow development phase.
+
+The goal is to create a workflow and be able to test and improve it.
{product_name} provides some tooling in order to facilitate the developer to
try the workflows during this development phase and refine them before going to
deployment phase.
+As an overview, we have the following resources to help in this development
phase:
Review Comment:
```suggestion
As an overview, you have the following resources to help in this development
phase:
```
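The "How to configure logging" item listed among the development-phase resources boils down to standard Quarkus logging properties. A minimal sketch (the `org.kie` category is an assumption about which packages are worth turning up while iterating on workflows):

```properties
# Global level, plus a more verbose category for workflow engine internals
quarkus.log.level=INFO
quarkus.log.category."org.kie".level=DEBUG
quarkus.log.console.format=%d{HH:mm:ss} %-5p [%c{2.}] %s%e%n
```

In Quarkus dev mode these take effect on live reload, so log verbosity can be tuned without restarting the application.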
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+= Creating a Quarkus Workflow Project
+
+As a developer, you can use {product_name} to create an application and in
this guide we want to explore different options and provide an overview of
available tools that can help in that purpose.
+
+We will also use Quarkus dev mode for iterative development and testing.
+
+As a common application development, we have different phases: Analysis,
Development and Deployment. Let's explore in detail each phase and what
{product_name} provides in each case:
+
+* <<proc-analysis-phase,Analysis and taking decisions phase>>
+** <<proc-adding-persistence,Adding persistence?>>
+** <<proc-adding-eventing,Adding eventing?>>
+** <<proc-adding-data-index-service,Adding Data Index service?>>
+** <<proc-adding-job-service,Adding Job service?>>
+
+* <<proc-development-phase,Development phase>>
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a
workflow, Running your workflow application and Testing your workflow
application >>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui, Refine your workflow testing with Dev-UI>>
+* <<proc-deployment-phase,Deployment phase>>
+
+
+.Prerequisites
+* You have setup your environment according to the
xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced
environment setup] guide.
+
+For more information about the tooling and the required dependencies, see
xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar
with {product_name} tooling].
+
+ifeval::["{kogito_version_redhat}" != ""]
+include::../../pages/_common-content/downstream-project-setup-instructions.adoc[]
+endif::[]
+
+
+[[proc-analysis-phase]]
+== Analysis phase
+
+Start by analyzing the requirements for your {product_name} application. This
will enable you to make decisions about persistence, eventing, security,
topology and component interaction needs of your application.
+
+[[proc-adding-persistence]]
+=== Adding persistence?
+Service orchestration is a relevant use case regarding the rise of
microservices and event driven architectures. These architectures focus on
communication between services and there is always the need to coordinate that
communication without the persistence addition requirement.
+
+{product_name} applications use an in-memory persistence by default. This
makes all the {workflow_instance} information volatile upon runtime restarts.
In case of this guide, when the workflow runtime is restarted.
+As a developer, you need to decide if there is a need to ensure that your
workflow instances remain consistent in the context.
+
+If your application requires persistence, you need to decide what kind of
persistence is needed and configure it properly.
+Follow the {product_name}
xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[persistence
guide] for more information.
+
+You can find more information about how to create an application which writes
to and reads from a database following
link:https://quarkus.io/guides/getting-started-dev-services[Your second Quarkus
application] guide.
+
+[[proc-adding-eventing]]
+=== Adding eventing?
+
+Quarkus unifies reactive and imperative programming you can find more
information about this in the
link:https://quarkus.io/guides/quarkus-reactive-architecture[Quarkus Reactive
Architecture] guide.
+
+In this phase we need to decide how the Event-Driven Architecture needs to be
added to our project.
+As an event-driven architecture, it uses events to trigger and communicate
between services. It allows decoupled applications to asynchronously publish
and subscribe to events through an event broker. The event-driven architecture
is a method of developing systems that allows information to flow in real time
between applications, microservices, and connected devices.
+
+This means that applications and devices do not need to know where they are
sending information or where the information they are consuming comes from.
+
+If we choose to add eventing, {product_name} supports different options like:
+
+* *Kafka Connector* for Reactive Messaging. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[]
for more details.
+* *Knative* eventing. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]
for more details.
+
+You need to choose how the different project components will communicate and what kind of communication is needed. For more details, see link:https://quarkus.io/guides/quarkus-reactive-architecture#quarkus-extensions-enabling-reactive[Quarkus Extensions enabling Reactive].
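+
+As an illustration, with the Kafka connector the incoming and outgoing workflow event channels are mapped in `application.properties`. The topic names below are assumptions for a local setup; see the Kafka guide above for the exact channel configuration:
+
+[source,properties]
+----
+# assumption: local Kafka broker and example topic names
+mp.messaging.incoming.kogito_incoming_stream.connector=smallrye-kafka
+mp.messaging.incoming.kogito_incoming_stream.topic=workflow-events-in
+mp.messaging.outgoing.kogito_outgoing_stream.connector=smallrye-kafka
+mp.messaging.outgoing.kogito_outgoing_stream.topic=workflow-events-out
+----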
+
+[[proc-adding-data-index-service]]
+=== Adding Data Index service?
+
+The decision here is whether you need the {data_index_ref} service to index the {workflow_instance} information so that it can be consumed from {product_name} tooling or through a GraphQL endpoint.
+
+The {data_index_ref} service is available for this purpose. See
xref:data-index/data-index-core-concepts.adoc[] for more details.
+
+Once you decide to index the data, you need to select how to integrate the service into your topology. There are different options:
+
+* You can choose to have the data indexing service integrated directly in your application using the different xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
+This allows using the same data source as the application persistence without any extra service deployment.
+** *{data_index_ref} persistence extension*, which persists the indexed data directly in the application data source.
+** *{data_index_ref} extension*, which persists the indexed data in the application data source and also provides the GraphQL endpoint to interact with the persisted data.
+* Another option is to run the Data Index as a standalone service. In this case, you need to properly configure the communication between your new application and the service. More details in xref:data-index/data-index-service.adoc[].
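+
+For the standalone option, the application typically needs to know where the {data_index_ref} service is listening. A hedged sketch, where the property names and the local URLs are assumptions to be verified against the Data Index guide:
+
+[source,properties]
+----
+# assumption: standalone Data Index running locally on port 8180
+kogito.dataindex.http.url=http://localhost:8180
+kogito.dataindex.ws.url=ws://localhost:8180
+----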
+
+
+[[proc-adding-job-service]]
+=== Adding Job service?
+
+The Job Service facilitates the scheduled execution of tasks in a cloud environment. If any of your {product_name} workflows needs some kind of temporary schedule, you will need to integrate the Job service.
+
+Once you decide you want the Job Service, you need to select how to integrate it into your topology. There are different options:
+
+* You can choose to have the Job service integrated directly in your {product_name} Quarkus application using the xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[] guide.
+* Explore how to integrate the Job service and define the interaction with your {product_name} application workflows. You can find more Job service related details in xref:job-services/core-concepts.adoc[Job Service Core concepts].
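+
+For a standalone Job Service, the workflow application usually points at the service endpoint. A minimal sketch, assuming a local Job Service on port 8580:
+
+[source,properties]
+----
+# assumption: standalone Job Service running locally
+kogito.jobs-service.url=http://localhost:8580
+----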
+
+[[proc-development-phase]]
+== Development phase
+
+After making decisions about the components you need to integrate into your project, you can jump into the workflow development phase.
+
+The goal is to create a workflow and be able to test and improve it. {product_name} provides tooling that helps developers try out workflows during this development phase and refine them before moving to the deployment phase.
+As an overview, the following resources help in this development phase:
+
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a
workflow, Running your workflow application and Testing your workflow
application >>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui,Refine your workflow testing with Dev-UI>>
+
+[[proc-boostrapping-the-project]]
+=== Bootstrapping a project, Creating a workflow, Running your workflow
application and Testing your workflow application
+
+To create your workflow service, first you need to bootstrap a project.
+You can explore this in detail by following the {product_name} xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[] guide.
+
+[[proc-logging-configuration]]
+=== How to configure logging
+
+To understand what is happening in the environment, {product_name} uses Quarkus log management. Logs can provide a detailed history of what happened leading up to an issue.
+
+Quarkus uses the JBoss Log Manager logging backend for publishing application and framework logs.
+Quarkus supports the JBoss Logging API and multiple other logging APIs, seamlessly integrated with JBoss Log Manager.
+For more details, see the link:{quarkus_guides_logging_url}[Quarkus Logging Configuration guide].
+
+.Example adding Logging configuration properties in `application.properties`
file
+[source,properties]
+----
+quarkus.log.console.enable=true <1>
+quarkus.log.level=INFO <2>
+quarkus.log.category."org.apache.kafka.clients".level=INFO
+quarkus.log.category."org.apache.kafka.common.utils".level=INFO <3>
+----
+<1> Whether console logging is enabled; it is set to `true` by default.
+<2> The log level of the root category, which is used as the default log level for all categories.
+<3> Logging is configured on a per-category basis, with each category configured independently. Configuration for a category applies recursively to all subcategories unless there is a more specific subcategory configuration.
+
+[NOTE]
+====
+See the link:{quarkus_guides_logging_url}#loggingConfigurationReference[Logging configuration reference] for details on how the log properties can be configured.
+====
+
+[[proc-dev-ui]]
+=== Refine your workflow testing with Dev-UI
+
+Quarkus provides a host of features when dev mode is enabled, allowing things like:
+
+* *Change configuration values*.
+* *Running Development services*, including zero-config setup of data sources. When testing or running in dev mode, Quarkus can even provide you with a zero-config database out of the box, a feature referred to as Dev Services. More information can be found in link:{quarkus_guides_logging_url}#dev-services[Quarkus introduction to Dev services].
+* *Access to Swagger UI*, which allows exploring the different {product_name} application endpoints. The `quarkus-smallrye-openapi` extension exposes the Swagger UI when Quarkus is running in dev mode. Additional information can be found in link:{quarkus_guides_swaggerui_url}#dev-mode[Use Swagger UI for development].
+* *Data Index GraphQL UI*, which allows you to perform GraphQL queries or to explore the data schema.
+* The ability to *explore the {workflow_instances}* if the {product_name} Runtime Tools Quarkus Dev UI is included.
+
+[NOTE]
+====
+By default, Swagger UI is only available when Quarkus is started in dev or
test mode.
+
+If you want to make it available in production too, you can include the following configuration in your `application.properties` file:
+
+[source,properties]
+----
+quarkus.swagger-ui.always-include=true
+----
+This is a build-time property; it cannot be changed at runtime after your application is built.
+====
+
+[[proc-deployment-phase]]
+== Deployment phase
+
+At this stage we have a {product_name} Quarkus application well tested and
ready to be deployed.
Review Comment:
```suggestion
At this stage you have a {product_name} Quarkus application well tested and
ready to be deployed.
```
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+* We can choose to have the Job service integrated directly in our
{product_name} Quarkus application using
xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[].
+* Explore how to integrate the Job service and define the interaction with our
{product_name} application workflows. You can find more Job service related
details in xref:job-services/core-concepts.adoc[Job Service Core concepts]
Review Comment:
```suggestion
* You can choose to have the Job service integrated directly in your
{product_name} Quarkus application using
xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[]
guide.
* Explore how to integrate the Job service and define the interaction with
your {product_name} application workflows. You can find more Job service
related details in xref:job-services/core-concepts.adoc[Job Service Core
concepts].
```
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+The {data_index_ref} service is available for this purpose. See
xref:data-index/data-index-core-concepts.adoc[] for more details.
+
+Once we decide we want to index the data, we need to select how to integrate
the service in our topology. Having different options like:
Review Comment:
```suggestion
The {data_index_ref} service is able to index the {workflow_instance}
information using GraphQL. This is very useful if you want to consume the
workflow data in different applications through a GraphQL endpoint.
For more information about {data_index_ref} service see
xref:data-index/data-index-core-concepts.adoc[] for more details.
If you decide to index the data, you need to select how to integrate the
{data_index_ref} service in your topology. Here are some options:
```
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+The Job Service facilitates the scheduled execution of tasks in a cloud
environment. If any of our {product_name} workflow needs some kind of temporary
schedule, we will need to integrate the Job service.
Review Comment:
```suggestion
The Job Service facilitates the scheduled execution of tasks in a cloud
environment. If any of your {product_name} workflow needs some kind of
temporary schedule, you will need to integrate the Job service.
```
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+= Creating a Quarkus Workflow Project
+
+As a developer, you can use {product_name} to create an application and in
this guide we want to explore different options and provide an overview of
available tools that can help in that purpose.
+
+We will also use Quarkus dev mode for iterative development and testing.
+
+As a common application development, we have different phases: Analysis,
Development and Deployment. Let's explore in detail each phase and what
{product_name} provides in each case:
+
+* <<proc-analysis-phase,Analysis and taking decisions phase>>
+** <<proc-adding-persistence,Adding persistence?>>
+** <<proc-adding-eventing,Adding eventing?>>
+** <<proc-adding-data-index-service,Adding Data Index service?>>
+** <<proc-adding-job-service,Adding Job service?>>
+
+* <<proc-development-phase,Development phase>>
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a
workflow, Running your workflow application and Testing your workflow
application >>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui, Refine your workflow testing with Dev-UI>>
+* <<proc-deployment-phase,Deployment phase>>
+
+
+.Prerequisites
+* You have setup your environment according to the
xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced
environment setup] guide.
+
+For more information about the tooling and the required dependencies, see
xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar
with {product_name} tooling].
+
+ifeval::["{kogito_version_redhat}" != ""]
+include::../../pages/_common-content/downstream-project-setup-instructions.adoc[]
+endif::[]
+
+
+[[proc-analysis-phase]]
+== Analysis phase
+
+Start by analyzing the requirements for your {product_name} application. This
will enable you to make decisions about persistence, eventing, security,
topology and component interaction needs of your application.
+
+[[proc-adding-persistence]]
+=== Adding persistence?
+Service orchestration is a relevant use case regarding the rise of
microservices and event driven architectures. These architectures focus on
communication between services and there is always the need to coordinate that
communication without the persistence addition requirement.
+
+{product_name} applications use an in-memory persistence by default. This
makes all the {workflow_instance} information volatile upon runtime restarts.
In case of this guide, when the workflow runtime is restarted.
+As a developer, you need to decide if there is a need to ensure that your
workflow instances remain consistent in the context.
+
+If your application requires persistence, you need to decide what kind of
persistence is needed and configure it properly.
+Follow the {product_name}
xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[persistence
guide] for more information.
+
+You can find more information about how to create an application which writes
to and reads from a database following
link:https://quarkus.io/guides/getting-started-dev-services[Your second Quarkus
application] guide.
+
+[[proc-adding-eventing]]
+=== Adding eventing?
+
+Quarkus unifies reactive and imperative programming you can find more
information about this in the
link:https://quarkus.io/guides/quarkus-reactive-architecture[Quarkus Reactive
Architecture] guide.
+
+In this phase we need to decide how the Event-Driven Architecture needs to be
added to our project.
+As an event-driven architecture, it uses events to trigger and communicate
between services. It allows decoupled applications to asynchronously publish
and subscribe to events through an event broker. The event-driven architecture
is a method of developing systems that allows information to flow in real time
between applications, microservices, and connected devices.
+
+This means that applications and devices do not need to know where they are
sending information or where the information they are consuming comes from.
+
+If we choose to add eventing, {product_name} supports different options like:
+
+* *Kafka Connector* for Reactive Messaging. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[]
for more details.
+* *Knative* eventing. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]
for more details.
+
+You need to choose how the different project components will communicate and what kind of communication is needed. For more details, see link:https://quarkus.io/guides/quarkus-reactive-architecture#quarkus-extensions-enabling-reactive[Quarkus Extensions enabling Reactive].
+
+[[proc-adding-data-index-service]]
+=== Adding Data Index service?
+
+Here you need to decide whether to enable the {data_index_ref} service to index the {workflow_instance} information, so that it can be consumed from {product_name} tooling or through a GraphQL endpoint.
+
+The {data_index_ref} service is available for this purpose. See
xref:data-index/data-index-core-concepts.adoc[] for more details.
+
+Once you decide to index the data, you need to select how to integrate the service into your topology. The options include:
+
+* You can integrate the data indexing service directly into your application using the extensions described in xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
+This allows using the same data source as the application persistence without deploying any extra service.
+** *{data_index_ref} persistence extension*, which persists the indexed data directly in the application data source.
+** *{data_index_ref} extension*, which persists the indexed data in the application data source and also provides the GraphQL endpoint to interact with the persisted data.
+* Another option is to run the {data_index_ref} as a standalone service. In that case, you need to properly configure the communication between your application and the service. For more details, see xref:data-index/data-index-service.adoc[].
+
+
+[[proc-adding-job-service]]
+=== Adding Job service?
+
+The Job Service facilitates the scheduled execution of tasks in a cloud environment. If any of your {product_name} workflows needs some kind of time-based scheduling, you need to integrate the Job Service.
+
+Once you decide you need the Job Service, you need to select how to integrate it into your topology. The options include:
+
+* You can integrate the Job Service directly into your {product_name} Quarkus application using the extensions described in xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[].
+* You can also integrate the Job Service as a separate component and define its interaction with your {product_name} application workflows. You can find more Job Service related details in xref:job-services/core-concepts.adoc[Job Service Core concepts].
+
+[[proc-development-phase]]
+== Development phase
+
+After taking some decisions about the components we need to integrate in our
project, we can jump into the workflow development phase.
Review Comment:
```suggestion
Once you have decided which components you need to integrate into your
{product_name} project, you can jump into the workflow development phase.
```
##########
serverlessworkflow/modules/ROOT/pages/use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-project.adoc:
##########
@@ -0,0 +1,187 @@
+= Creating a Quarkus Workflow Project
+
+As a developer, you can use {product_name} to create an application. In this guide, we explore the different options and provide an overview of the available tools that can help with that purpose.
+
+We will also use Quarkus dev mode for iterative development and testing.
+
+As in common application development, there are different phases: analysis, development, and deployment. Let's explore in detail each phase and what {product_name} provides in each case:
+
+* <<proc-analysis-phase,Analysis and taking decisions phase>>
+** <<proc-adding-persistence,Adding persistence?>>
+** <<proc-adding-eventing,Adding eventing?>>
+** <<proc-adding-data-index-service,Adding Data Index service?>>
+** <<proc-adding-job-service,Adding Job service?>>
+
+* <<proc-development-phase,Development phase>>
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a workflow, Running your workflow application and Testing your workflow application>>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui, Refine your workflow testing with Dev-UI>>
+* <<proc-deployment-phase,Deployment phase>>
+
+
+.Prerequisites
+* You have set up your environment according to the xref:getting-started/preparing-environment.adoc#proc-advanced-local-environment-setup[advanced environment setup] guide.
+
+For more information about the tooling and the required dependencies, see
xref:getting-started/getting-familiar-with-our-tooling.adoc[Getting familiar
with {product_name} tooling].
+
+ifeval::["{kogito_version_redhat}" != ""]
+include::../../pages/_common-content/downstream-project-setup-instructions.adoc[]
+endif::[]
+
+
+[[proc-analysis-phase]]
+== Analysis phase
+
+Start by analyzing the requirements for your {product_name} application. This will enable you to make decisions about the persistence, eventing, security, topology, and component interaction needs of your application.
+
+[[proc-adding-persistence]]
+=== Adding persistence?
+Service orchestration is a relevant use case given the rise of microservices and event-driven architectures. These architectures focus on communication between services, and that communication often needs to be coordinated without requiring additional persistence.
+
+{product_name} applications use in-memory persistence by default. This makes all the {workflow_instance} information volatile across runtime restarts, which in the case of this guide means whenever the workflow runtime is restarted.
+As a developer, you need to decide whether your workflow instances must remain consistent across such restarts.
+
+If your application requires persistence, you need to decide what kind of
persistence is needed and configure it properly.
+Follow the {product_name}
xref:use-cases/advanced-developer-use-cases/persistence/persistence-core-concepts.adoc[persistence
guide] for more information.
+
+You can find more information about how to create an application that writes to and reads from a database by following the link:https://quarkus.io/guides/getting-started-dev-services[Your second Quarkus application] guide.
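+A minimal sketch of what enabling persistence could look like in `application.properties`, assuming JDBC persistence with PostgreSQL (the property names and credentials below are illustrative; verify them against the persistence guide for your {product_name} version):
+
+[source,properties]
+----
+# switch from the default in-memory store to JDBC persistence
+kogito.persistence.type=jdbc
+# standard Quarkus datasource configuration (PostgreSQL as an example)
+quarkus.datasource.db-kind=postgresql
+quarkus.datasource.username=workflow-user
+quarkus.datasource.password=workflow-pass
+quarkus.datasource.jdbc.url=jdbc:postgresql://localhost:5432/workflows
+----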
+
+[[proc-adding-eventing]]
+=== Adding eventing?
+
+Quarkus unifies reactive and imperative programming. You can find more information about this in the link:https://quarkus.io/guides/quarkus-reactive-architecture[Quarkus Reactive Architecture] guide.
+
+In this phase, you need to decide whether and how an event-driven architecture should be added to your project.
+An event-driven architecture uses events to trigger and communicate between services. It allows decoupled applications to asynchronously publish and subscribe to events through an event broker, so that information flows in real time between applications, microservices, and connected devices.
+
+This means that applications and devices do not need to know where they are
sending information or where the information they are consuming comes from.
+
+If you choose to add eventing, {product_name} supports different options, such as:
+
+* *Kafka Connector* for Reactive Messaging. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-producing-events-with-kafka.adoc[]
for more details.
+* *Knative* eventing. See
xref:use-cases/advanced-developer-use-cases/event-orchestration/consume-produce-events-with-knative-eventing.adoc[]
for more details.
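+As an illustration, the *Kafka Connector* option usually comes down to SmallRye Reactive Messaging channel configuration in `application.properties`. The channel and topic names below are examples only; see the Kafka guide linked above for the channel names your workflow actually uses:
+
+[source,properties]
+----
+# events consumed by the workflow arrive on this Kafka topic
+mp.messaging.incoming.kogito_incoming_stream.connector=smallrye-kafka
+mp.messaging.incoming.kogito_incoming_stream.topic=workflow-events-in
+# events produced by the workflow are published to this Kafka topic
+mp.messaging.outgoing.kogito_outgoing_stream.connector=smallrye-kafka
+mp.messaging.outgoing.kogito_outgoing_stream.topic=workflow-events-out
+----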
+
+You need to choose how the different project components will communicate and what kind of communication is needed. For more details, see link:https://quarkus.io/guides/quarkus-reactive-architecture#quarkus-extensions-enabling-reactive[Quarkus Extensions enabling Reactive].
+
+[[proc-adding-data-index-service]]
+=== Adding Data Index service?
+
+Here you need to decide whether to enable the {data_index_ref} service to index the {workflow_instance} information, so that it can be consumed from {product_name} tooling or through a GraphQL endpoint.
+
+The {data_index_ref} service is available for this purpose. See
xref:data-index/data-index-core-concepts.adoc[] for more details.
+
+Once you decide to index the data, you need to select how to integrate the service into your topology. The options include:
+
+* You can integrate the data indexing service directly into your application using the extensions described in xref:use-cases/advanced-developer-use-cases/data-index/data-index-quarkus-extension.adoc[].
+This allows using the same data source as the application persistence without deploying any extra service.
+** *{data_index_ref} persistence extension*, which persists the indexed data directly in the application data source.
+** *{data_index_ref} extension*, which persists the indexed data in the application data source and also provides the GraphQL endpoint to interact with the persisted data.
+* Another option is to run the {data_index_ref} as a standalone service. In that case, you need to properly configure the communication between your application and the service. For more details, see xref:data-index/data-index-service.adoc[].
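+For example, embedding the {data_index_ref} in the application is a matter of adding one of the extensions to the project `pom.xml`. The artifact below is illustrative; check the extension guide linked above for the exact artifact matching your database and {product_name} version:
+
+[source,xml]
+----
+<dependency>
+  <groupId>org.kie.kogito</groupId>
+  <artifactId>kogito-addons-quarkus-data-index-postgresql</artifactId>
+</dependency>
+----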
+
+
+[[proc-adding-job-service]]
+=== Adding Job service?
+
+The Job Service facilitates the scheduled execution of tasks in a cloud environment. If any of your {product_name} workflows needs some kind of time-based scheduling, you need to integrate the Job Service.
+
+Once you decide you need the Job Service, you need to select how to integrate it into your topology. The options include:
+
+* You can integrate the Job Service directly into your {product_name} Quarkus application using the extensions described in xref:use-cases/advanced-developer-use-cases/job-service/quarkus-extensions.adoc[].
+* You can also integrate the Job Service as a separate component and define its interaction with your {product_name} application workflows. You can find more Job Service related details in xref:job-services/core-concepts.adoc[Job Service Core concepts].
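+As a sketch, pointing the application to an externally deployed Job Service is a matter of configuration. The property name and URL below are examples; see the Job Service guides linked above for the details:
+
+[source,properties]
+----
+# location of the external Job Service instance (example URL)
+kogito.jobs-service.url=http://localhost:8580
+----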
+
+[[proc-development-phase]]
+== Development phase
+
+Once you have decided which components you need to integrate into your {product_name} project, you can jump into the workflow development phase.
+
+The goal is to create a workflow and be able to test and improve it. {product_name} provides tooling that makes it easier for developers to try their workflows during this development phase and refine them before moving to the deployment phase.
+As an overview, the following resources can help in this development phase:
+
+** <<proc-boostrapping-the-project,Bootstrapping a project, Creating a workflow, Running your workflow application and Testing your workflow application>>
+** <<proc-logging-configuration,How to configure logging>>
+** <<proc-dev-ui,Refine your workflow testing with Dev-UI>>
+
+[[proc-boostrapping-the-project]]
+=== Bootstrapping a project, Creating a workflow, Running your workflow
application and Testing your workflow application
+
+To create your workflow service, first you need to bootstrap a project.
+You can explore it in a detailed way following the {product_name}
xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[]
guide.
Review Comment:
```suggestion
Follow the {product_name}
xref:use-cases/advanced-developer-use-cases/getting-started/create-your-first-workflow-service.adoc[]
guide to set up a minimal working project.
```
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]