This is an automated email from the ASF dual-hosted git repository.
shuber pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/unomi.git
The following commit(s) were added to refs/heads/master by this push:
new 9450e6122 [UNOMI-638] - Updates documentation in preparation for Unomi
2 (#472)
9450e6122 is described below
commit 9450e61222cda075511ea1e15d91548d4d2faff7
Author: Francois G <[email protected]>
AuthorDate: Mon Aug 29 08:08:26 2022 -0400
[UNOMI-638] - Updates documentation in preparation for Unomi 2 (#472)
* Initial commit
* Integrated gdoc
* Added draft GraphQL documentation
* Added details about Docker and Elasticsearch
* Continued updating getting started
* Some updates to ES versions
* Continued updates
* Intermediate commit
* Updated recipes
* Fixed missing url in example
* Intermediary commit
* Added more details about the migration
* Fixed formatting
* Fixed formatting
* Added instructions for debug
* Added details about not running migration across nodes
* Corrected wording
* Added details about modifying schema
* Added migration instructions
* Added details about Elasticsearch permissions
* Updated wording
* Some more updates
* Addressed comment
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/whats-new.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/whats-new.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/whats-new.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/whats-new.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/recipes.adoc
Co-authored-by: Serge Huber <[email protected]>
* Added link to JSON schema
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/recipes.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/recipes.adoc
Co-authored-by: Serge Huber <[email protected]>
* Update manual/src/main/asciidoc/recipes.adoc
Co-authored-by: Serge Huber <[email protected]>
Co-authored-by: Serge Huber <[email protected]>
---
manual/src/main/asciidoc/5-min-quickstart.adoc | 39 +++-
manual/src/main/asciidoc/configuration.adoc | 25 ++-
manual/src/main/asciidoc/getting-started.adoc | 8 +-
manual/src/main/asciidoc/graphql.adoc | 49 +++++
manual/src/main/asciidoc/index.adoc | 8 +-
.../src/main/asciidoc/jsonSchema/introduction.adoc | 2 +
.../asciidoc/migrations/migrate-1.4-to-1.5.adoc | 2 +-
.../asciidoc/migrations/migrate-1.5-to-1.6.adoc | 15 ++
.../asciidoc/migrations/migrate-1.6-to-2.0.adoc | 189 +++++++++++++++++
.../src/main/asciidoc/migrations/migrations.adoc | 8 +
manual/src/main/asciidoc/recipes.adoc | 224 +++++++++++++++++----
manual/src/main/asciidoc/request-examples.adoc | 4 +-
manual/src/main/asciidoc/whats-new.adoc | 110 ++++++++++
13 files changed, 622 insertions(+), 61 deletions(-)
diff --git a/manual/src/main/asciidoc/5-min-quickstart.adoc
b/manual/src/main/asciidoc/5-min-quickstart.adoc
index 380c3160e..feb6e26fd 100644
--- a/manual/src/main/asciidoc/5-min-quickstart.adoc
+++ b/manual/src/main/asciidoc/5-min-quickstart.adoc
@@ -11,12 +11,47 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//
-=== Five Minutes QuickStart
+
+=== Quick start with Docker
+
+Begin by creating a `docker-compose.yml` file with the following content:
+
+[source]
+----
+version: '3.8'
+services:
+ elasticsearch:
+ image: docker.elastic.co/elasticsearch/elasticsearch:7.17.5
+ environment:
+ - discovery.type=single-node
+ ports:
+ - 9200:9200
+ unomi:
+ # Unomi version can be updated based on your needs
+ image: apache/unomi:2.0.0
+ environment:
+ - UNOMI_ELASTICSEARCH_ADDRESSES=elasticsearch:9200
+ - UNOMI_THIRDPARTY_PROVIDER1_IPADDRESSES=0.0.0.0/0,::1,127.0.0.1
+ ports:
+ - 8181:8181
+ - 9443:9443
+ - 8102:8102
+ links:
+ - elasticsearch
+ depends_on:
+ - elasticsearch
+----
+
+From the same folder, start the environment using `docker-compose up` and wait
for the startup to complete.
+
+Try accessing https://localhost:9443/cxs/cluster with the username/password
karaf/karaf. You might get a certificate warning in your browser; it is safe to
accept it despite the warning.
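The same check can be performed from the command line once the stack is up (a sketch assuming the default karaf/karaf credentials; `-k` tells curl to accept the self-signed certificate):

```shell
# Query the cluster endpoint over HTTPS; -k accepts the self-signed certificate
curl -k -u karaf:karaf https://localhost:9443/cxs/cluster
```

A JSON response describing the cluster nodes indicates Unomi started correctly.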
+
+=== Quick start manually
1) Install JDK 8
(https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html)
and make sure you set the
JAVA_HOME variable
https://docs.oracle.com/cd/E19182-01/820-7851/inst_cli_jdk_javahome_t/ (see our
<<JDK compatibility,Getting Started>> guide for more information on JDK
compatibility)
-2) Download ElasticSearch here :
https://www.elastic.co/downloads/past-releases/elasticsearch-7-4-2 (please
<strong>make sure</strong> you use the proper version : 7.4.2)
+2) Download ElasticSearch here:
https://www.elastic.co/downloads/past-releases/elasticsearch-7-17-5 (please
*make sure* you use the proper version: 7.17.5)
3) Uncompress it and change the `config/elasticsearch.yml` to include the
following config : <code>cluster.name: contextElasticSearch</code>
diff --git a/manual/src/main/asciidoc/configuration.adoc
b/manual/src/main/asciidoc/configuration.adoc
index eb611f301..e3c885cee 100644
--- a/manual/src/main/asciidoc/configuration.adoc
+++ b/manual/src/main/asciidoc/configuration.adoc
@@ -749,15 +749,11 @@ shell commands in the "Shell commands" section of the
documentation.
=== ElasticSearch authentication and security
-With ElasticSearch 7, it's possible to secure the access to your data.
(https://www.elastic.co/guide/en/elasticsearch/reference/7.5/secure-cluster.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.5/secure-cluster.html])
-
-Depending on your ElasticSearch license you may need to install Kibana and
enable xpack security:
https://www.elastic.co/guide/en/elasticsearch/reference/7.5/configuring-security.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.5/configuring-security.html]
+With ElasticSearch 7, it's possible to secure access to your data (see
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/configuring-stack-security.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.17/configuring-stack-security.html]
and
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/secure-cluster.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.17/secure-cluster.html]).
==== User authentication
-If your ElasticSearch have been configured to be only accessible by
authenticated users
(https://www.elastic.co/guide/en/elasticsearch/reference/7.5/setting-up-authentication.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.5/setting-up-authentication.html])
-
-Just edit `etc/org.apache.unomi.persistence.elasticsearch.cfg` to add the
following settings:
+If your ElasticSearch has been configured to be only accessible by
authenticated users, edit `etc/org.apache.unomi.persistence.elasticsearch.cfg`
to add the following settings:
[source]
----
@@ -770,11 +766,7 @@ password=PASSWORD
By default Unomi will communicate with ElasticSearch using `http`
but you can configure your ElasticSearch server(s) to allow encrypted request
using `https`.
-You can follow this documentation to enable SSL on your ElasticSearch
server(s):
-
-*
https://www.elastic.co/guide/en/elasticsearch/reference/7.5/ssl-tls.html[Full
documentation]
-*
https://www.elastic.co/guide/en/elasticsearch/reference/7.5/configuring-tls.html#node-certificates[Configure
certificates]
-*
https://www.elastic.co/guide/en/elasticsearch/reference/7.5/configuring-tls.html#tls-http[Encrypt
HTTP communications]
+You can follow this documentation to enable SSL on your ElasticSearch
server(s):
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-basic-setup-https.html[https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-basic-setup-https.html]
If your ElasticSearch is correctly configured to encrypt communications on
`https`:
@@ -792,3 +784,14 @@ of the ElasticSearch server(s). But if you need to trust
all certificates automa
----
sslTrustAllCertificates=true
----
+
+==== Permissions
+
+Apache Unomi requires a particular set of Elasticsearch permissions for its
operation.
+
+If you are using Elasticsearch in a production environment, you will most
likely need to fine-tune the permissions granted to the user used by Unomi.
+
+The following permissions are required by Unomi:
+
+ - required cluster privileges: `manage` OR `all`
+ - required index privileges on unomi indices: ((`write`) AND (`manage`) AND
(`read`)) OR `all`
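As an illustration, these privileges could be granted through an Elasticsearch role (a sketch; the role name is hypothetical, and the `context-*` index pattern assumes the default `context` index prefix):

```
POST /_security/role/unomi_role
{
  "cluster": ["manage"],
  "indices": [
    {
      "names": ["context-*"],
      "privileges": ["read", "write", "manage"]
    }
  ]
}
```

A user assigned this role would then be referenced in the Unomi Elasticsearch configuration.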
diff --git a/manual/src/main/asciidoc/getting-started.adoc
b/manual/src/main/asciidoc/getting-started.adoc
index 4c672323d..6ed56438a 100644
--- a/manual/src/main/asciidoc/getting-started.adoc
+++ b/manual/src/main/asciidoc/getting-started.adoc
@@ -34,17 +34,15 @@ them at your own risks.
===== ElasticSearch compatibility
-Starting with version 1.5.0 Apache Unomi adds compatibility with ElasticSearch
7.4 . It is highly recommended to use the
-ElasticSearch version provided by the documentation when possible. However
minor versions (7.4.x) should also work, and
-one version higher (7.5) will usually work. Going higher than that is risky
given the way that ElasticSearch is developed
-and breaking changes are introduced quite often. If in doubt, don't hesitate
to check with the Apache Unomi community
+Starting with version 2.0.0 Apache Unomi adds compatibility with ElasticSearch
7.17.5. It is highly recommended to use the
+ElasticSearch version specified in the documentation whenever possible. If in
doubt, don't hesitate to check with the Apache Unomi community
to get the latest information about ElasticSearch version compatibility.
==== Running Unomi
===== Start Unomi
-Start Unomi according to the <<Five Minutes QuickStart,five minutes quick
start>> or by compiling using the
+Start Unomi according to the <<Quick start with Docker,quick start with
Docker>> or by compiling using the
<<Building,building instructions>>. Once you have Karaf running,
you should wait until you see the following messages on the Karaf console:
diff --git a/manual/src/main/asciidoc/graphql.adoc
b/manual/src/main/asciidoc/graphql.adoc
new file mode 100644
index 000000000..418a5fd05
--- /dev/null
+++ b/manual/src/main/asciidoc/graphql.adoc
@@ -0,0 +1,49 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+=== GraphQL API
+
+First introduced in Apache Unomi 2.0, a GraphQL API is available as an
alternative to REST for interacting with the platform.
+Disabled by default, the GraphQL API is currently considered a beta feature.
+
+We look forward to seeing this new GraphQL API used. Feel free to open a
discussion on the
+https://the-asf.slack.com/messages/CBP2Z98Q7/[Unomi Slack channel] or
https://issues.apache.org/jira/projects/UNOMI/issues[create tickets on Jira].
+
+==== Enabling the API
+
+The GraphQL API must be enabled using a system property (or environment
variable):
+
+[source]
+----
+# Extract from: etc/custom.system.properties
+#######################################################################################################################
+## Settings for GraphQL
##
+#######################################################################################################################
+org.apache.unomi.graphql.feature.activated=${env:UNOMI_GRAPHQL_FEATURE_ACTIVATED:-false}
+----
+
+You can either modify the `org.apache.unomi.graphql.feature.activated`
property or specify the `UNOMI_GRAPHQL_FEATURE_ACTIVATED`
+environment variable (if using Docker for example).
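For example, in a Docker Compose file similar to the quick start one, the feature could be switched on via the environment (a sketch; the service name and image tag are assumptions):

```yaml
services:
  unomi:
    image: apache/unomi:2.0.0
    environment:
      # Enable the GraphQL API (disabled by default)
      - UNOMI_GRAPHQL_FEATURE_ACTIVATED=true
```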
+
+==== Endpoints
+
+Two endpoints were introduced for the Apache Unomi 2 GraphQL API:
+
+* `/graphql` is the primary endpoint for interacting programmatically with the
API and is intended to receive POST requests.
+* `/graphql-ui` provides access to the GraphQL Playground UI and is intended
to be accessed from a web browser.
+
+==== GraphQL Schema
+
+Thanks to GraphQL introspection, there is no dedicated documentation per se,
as the schema itself serves as documentation.
+
+You can easily view the schema by navigating to `/graphql-ui`. Depending on
your setup (localhost, public host, ...),
+you might need to adjust the URL to point GraphQL Playground to the `/graphql`
endpoint.
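As an illustration, once the feature is enabled you can verify that the API responds by sending a minimal introspection query to `/graphql` (a sketch assuming a local instance listening on port 8181):

```shell
# Ask the server for the name of its root query type via introspection
curl -X POST http://localhost:8181/graphql \
  -H "Content-Type: application/json" \
  --data-raw '{"query": "{ __schema { queryType { name } } }"}'
```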
diff --git a/manual/src/main/asciidoc/index.adoc
b/manual/src/main/asciidoc/index.adoc
index 111ba170e..dcebaf79c 100644
--- a/manual/src/main/asciidoc/index.adoc
+++ b/manual/src/main/asciidoc/index.adoc
@@ -12,7 +12,7 @@
// limitations under the License.
//
-= Apache Unomi 1.x - Documentation
+= Apache Unomi 2.x - Documentation
Apache Software Foundation
:doctype: article
:toc: left
@@ -24,6 +24,10 @@ Apache Software Foundation
image::asf_logo_url.png[pdfwidth=35%,align=center]
+== What's new
+
+include::whats-new.adoc[]
+
== Quick start
include::5-min-quickstart.adoc[]
@@ -40,6 +44,8 @@ include::configuration.adoc[]
include::jsonSchema/json-schema.adoc[]
+include::graphql.adoc[]
+
include::migrations/migrations.adoc[]
== Queries and aggregations
diff --git a/manual/src/main/asciidoc/jsonSchema/introduction.adoc
b/manual/src/main/asciidoc/jsonSchema/introduction.adoc
index ce4848000..e77d5ed4d 100644
--- a/manual/src/main/asciidoc/jsonSchema/introduction.adoc
+++ b/manual/src/main/asciidoc/jsonSchema/introduction.adoc
@@ -14,6 +14,8 @@
=== Introduction
+Introduced with Apache Unomi 2.0, JSON Schemas are used to validate data
submitted through all of the public (unprotected) API endpoints.
+
==== What is a JSON Schema
https://json-schema.org/specification.html[JSON Schema] is a powerful standard
for validating the structure of JSON data.
diff --git a/manual/src/main/asciidoc/migrations/migrate-1.4-to-1.5.adoc
b/manual/src/main/asciidoc/migrations/migrate-1.4-to-1.5.adoc
index 0bc191475..8d9337367 100644
--- a/manual/src/main/asciidoc/migrations/migrate-1.4-to-1.5.adoc
+++ b/manual/src/main/asciidoc/migrations/migrate-1.4-to-1.5.adoc
@@ -14,7 +14,7 @@
==== Data model and ElasticSearch 7
-Since Apache Unomi version 1.5.0 we decided to upgrade the supported
ElasticSearch version to the latest 7.4.2.
+Since Apache Unomi version 1.5.0 we decided to upgrade the supported
ElasticSearch version to 7.4.2.
To be able to do so, we had to rework the way the data was stored inside
ElasticSearch.
diff --git a/manual/src/main/asciidoc/migrations/migrate-1.5-to-1.6.adoc
b/manual/src/main/asciidoc/migrations/migrate-1.5-to-1.6.adoc
new file mode 100644
index 000000000..1f56510a4
--- /dev/null
+++ b/manual/src/main/asciidoc/migrations/migrate-1.5-to-1.6.adoc
@@ -0,0 +1,15 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+Migration from Unomi 1.5.x to 1.6.x does not require any particular steps;
simply restart your cluster on the new version.
diff --git a/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
b/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
new file mode 100644
index 000000000..926026147
--- /dev/null
+++ b/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
@@ -0,0 +1,189 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+
+=== Migration Overview
+
+Apache Unomi 2.0 is a major release, and as such it does introduce breaking
changes. This portion of the document details the various steps we recommend
following to successfully migrate your environment from Apache Unomi 1.6 to
Apache Unomi 2.0.
+
+There are two main steps in preparing your migration to Apache Unomi 2.0:
+
+- Updating applications consuming Unomi
+- Migrating your existing data
+
+=== Updating applications consuming Unomi
+
+Since Apache Unomi is an engine, you've probably built multiple applications
consuming its APIs; you might also have built extensions running directly in
Unomi.
+
+As you begin updating applications consuming Apache Unomi, it is generally a
good practice to <<Enabling debug mode,enable debug mode>>.
+Doing so will display any errors when processing events (such as JSON schema
validations), and will provide useful indications towards solving issues.
+
+==== Data Model changes
+
+There have been changes to the Unomi data model; please make sure to review
them in the <<what_s_new>> section of the user manual.
+
+==== Create JSON schemas
+
+Once you have updated your applications to align with the Unomi 2 data model,
the next step will be to create the necessary JSON schemas.
+
+Any event (and more generally, any object) received through Unomi public
endpoints requires a valid JSON schema.
+Apache Unomi ships, out of the box, with all of the necessary JSON Schemas for
its own operation but you will need to create schemas for any custom event you
may be using.
+
+When creating your new schemas, reviewing debug messages in the logs (using
`log:set DEBUG org.apache.unomi.schema.impl.SchemaServiceImpl` in the Karaf
console)
+will point to errors in your schemas or help you diagnose why events are not
being accepted.
+
+Note that it is currently not possible to modify or override an existing
system-deployed JSON schema via the REST API. It is however possible to deploy
new schemas and manage them through the REST API on the `/cxs/jsonSchema`
endpoint.
+If you are currently using custom properties on an Apache Unomi-provided event
type,
+you will need either to switch to a new custom eventType and create the
corresponding schema, or to create a Unomi schema extension. You can find more
details in the <<JSON schemas,JSON Schema>> section of this documentation.
+
+As a source of inspiration for creating new schemas, you can use the Apache
Unomi 2.0 schemas located at:
+
https://github.com/apache/unomi/tree/master/extensions/json-schema/services/src/main/resources/META-INF/cxs/schemas[extensions/json-schema/services/src/main/resources/META-INF/cxs/schemas].
+
+Finally, and although it is technically feasible, we recommend against
creating permissive JSON schemas that allow any event payload. In particular,
make sure you do not allow open properties, by using JSON schema keywords such
as
https://json-schema.org/understanding-json-schema/reference/object.html#unevaluated-properties[unevaluated
properties].
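For instance, a strict schema can close its object definition so that unexpected fields are rejected (a minimal sketch; the property names are illustrative, not an actual Unomi schema):

```
{
  "$schema": "https://json-schema.org/draft/2019-09/schema",
  "type": "object",
  "properties": {
    "eventType": { "type": "string" },
    "scope": { "type": "string" }
  },
  "required": ["eventType"],
  "unevaluatedProperties": false
}
```

With `unevaluatedProperties` set to `false`, a payload carrying any property not declared above fails validation.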
+
+=== Migrating your existing data
+
+==== Elasticsearch version and capacity
+
+While still using Unomi 1.6, the first step will be to upgrade your
Elasticsearch to 7.17.5.
+Documentation is available on
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/setup-upgrade.html[Elasticsearch's
website].
+
+Your Elasticsearch cluster must have enough capacity to handle the migration.
+At a minimum, the required capacity must be greater than the size of the
dataset in production + the size of the largest index.
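The size of the largest index can be checked with the Elasticsearch `_cat/indices` API, sorted by store size (a sketch assuming an unsecured cluster on localhost:9200):

```shell
# Largest indices first; the top entry is the largest index referenced above
curl -s "http://localhost:9200/_cat/indices?v&s=store.size:desc"
```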
+
+==== Migrate custom data
+
+Apache Unomi 2.0 knows how to migrate its own data from the old model to the
new one, but it does not know how to migrate custom events you might be using
in your environment.
+
+It relies on a set of Groovy scripts to perform its data migration, located in
https://github.com/apache/unomi/tree/master/tools/shell-commands/src/main/resources/META-INF/cxs/migration[tools/shell-commands/src/main/resources/META-INF/cxs/migration].
+These scripts are sorted alphabetically and executed sequentially when the
migration is started. You can use them as a source of inspiration for creating
your own.
+
+In most cases, migration steps consist of an Elasticsearch Painless script
that will handle the data changes.
+
+At runtime, when the migration is started, Unomi 2.0 will take its own scripts
and any additional scripts located in `data/migration/scripts`, sort the
resulting list alphabetically, and execute each migration script sequentially.
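The ordering logic can be pictured with a short Python sketch (the file names are illustrative, not the actual Unomi script names):

```python
# Merge built-in and custom migration scripts, then order them alphabetically,
# mimicking how the migration node sequences its steps.
builtin_scripts = [
    "migrate-2.0.0-01-indices.groovy",
    "migrate-2.0.0-10-profiles.groovy",
]
custom_scripts = ["migrate-2.0.0-05-myCustomEvents.groovy"]

execution_order = sorted(builtin_scripts + custom_scripts)
print(execution_order)
```

Note how a custom script can be slotted between built-in steps simply by choosing a name that sorts between them.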
+
+==== Perform the migration
+
+===== Checklist
+
+Before starting the migration, please ensure that:
+
+ - You do have a backup of your data
+ - You did practice the migration in a staging environment, NEVER migrate a
production environment without prior validation
+ - You verified your applications were operational with Apache Unomi 2.0 (JSON
schemas created, client applications updated, ...)
+ - You are running Elasticsearch 7.17.5 (or a later 7.x version)
+ - Your Elasticsearch cluster has enough capacity to handle the migration
+ - You are currently running Apache Unomi 1.6 (or a later 1.x version)
+ - You will be using the same Apache Unomi instance for the entire migration
process. Do not start the migration on one node and resume an interrupted
migration on another node.
+
+===== Migration process overview
+
+The migration is performed by means of a dedicated Apache Unomi 2.0 node
started in a particular migration mode.
+
+In a nutshell, the migration process consists of the following steps:
+
+- Shutdown your Apache Unomi 1.6 cluster
+- Start an Apache Unomi 2.0 migration node
+- Wait for data migration to complete
+- Start your Apache Unomi 2.0 cluster
+- (optional) Import additional JSON Schemas
+
+Each migration step maintains its execution state, meaning that if a step
fails you can fix the issue and resume the migration from the failed step.
+
+===== Configuration
+
+The following environment variables are used for the migration:
+
+|===
+|Environment Variable|Unomi Setting|Default
+
+|UNOMI_ELASTICSEARCH_ADDRESSES
+|org.apache.unomi.elasticsearch.addresses
+|localhost:9200
+
+|UNOMI_ELASTICSEARCH_SSL_ENABLE
+|org.apache.unomi.elasticsearch.sslEnable
+|false
+
+|UNOMI_ELASTICSEARCH_USERNAME
+|org.apache.unomi.elasticsearch.username
+|
+
+|UNOMI_ELASTICSEARCH_PASSWORD
+|org.apache.unomi.elasticsearch.password
+|
+
+|UNOMI_ELASTICSEARCH_SSL_TRUST_ALL_CERTIFICATES
+|org.apache.unomi.elasticsearch.sslTrustAllCertificates
+|false
+
+|UNOMI_ELASTICSEARCH_INDEXPREFIX
+|org.apache.unomi.elasticsearch.index.prefix
+|context
+
+|UNOMI_MIGRATION_RECOVER_FROM_HISTORY
+|org.apache.unomi.migration.recoverFromHistory
+|true
+
+|===
+
+If there is a need for more advanced configuration, the configuration file
used by Apache Unomi 2.0 is located at `etc/org.apache.unomi.migration.cfg`.
+
+===== Migrate manually
+
+You can migrate manually using the Karaf console.
+
+After having started Apache Unomi 2.0 with the `./karaf` command, you will be
presented with the Karaf shell.
+
+From there you have two options:
+
+ - If the necessary configuration variables (see above) have already been set,
you can start the migration using the command: `unomi:migrate 1.6.0`
+ - Otherwise, you can run the same `unomi:migrate 1.6.0` command and provide
the configuration settings interactively when prompted in the terminal.
+
+The parameter of the migrate command (1.6.0 in the example above) corresponds
to the version you're migrating from.
+
+At the end of the migration, you can start Unomi 2.0 as usual using:
`unomi:start`.
+
+===== Migrate with Docker
+
+The migration can also be performed using Docker images; the migration itself
can be started by passing a specific value to the `KARAF_OPTS` environment
variable.
+
+In the context of this migration guide, we will assume that:
+
+ - Custom migration scripts are located in `/home/unomi/migration/scripts/`
+ - Painless scripts, or more generally any migration assets, are located in
`/home/unomi/migration/assets/`; these assets will be mounted under
`/tmp/assets/` inside the Docker container.
+
+[source]
+----
+docker run \
+ -e UNOMI_ELASTICSEARCH_ADDRESSES=localhost:9200 \
+ -e KARAF_OPTS="-Dunomi.autoMigrate=1.6.0" \
+ -v
/home/unomi/migration/scripts/:/opt/apache-unomi/data/migration/scripts \
+ -v /home/unomi/migration/assets/:/tmp/assets/ \
+ apache/unomi:2.0.0-SNAPSHOT
+----
+
+You might need to provide additional variables (see table above) depending on
your environment.
+
+If the migration fails, you can simply restart this command.
+
+Using the above command, Unomi 2.0 will not start automatically at the end of
the migration. You can start Unomi automatically at the end of the migration by
passing: `-e KARAF_OPTS="-Dunomi.autoMigrate=1.6.0 -Dunomi.autoStart=true"`
+
+===== Step by step migration with Docker
+
+Once your cluster is shut down, performing the migration is as simple as
starting a dedicated Docker container.
+
+===== Post Migration
+
+Once the migration has been executed, you will be able to start Apache Unomi
2.0.
+
+Remember that you still need to submit JSON schemas corresponding to your
events; you can do so using the API.
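For reference, a schema can be submitted to the `/cxs/jsonSchema` endpoint mentioned earlier (a sketch assuming the default karaf/karaf credentials and a schema saved locally as `my-event-schema.json`, a hypothetical file name):

```shell
# Deploy a custom JSON schema to a local Unomi instance
curl -X POST http://localhost:8181/cxs/jsonSchema \
  --user karaf:karaf \
  -H "Content-Type: application/json" \
  --data @my-event-schema.json
```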
diff --git a/manual/src/main/asciidoc/migrations/migrations.adoc
b/manual/src/main/asciidoc/migrations/migrations.adoc
index 00c3dfbab..5f1f3bd26 100644
--- a/manual/src/main/asciidoc/migrations/migrations.adoc
+++ b/manual/src/main/asciidoc/migrations/migrations.adoc
@@ -15,6 +15,14 @@
This section contains information and steps to migrate between major Unomi
versions.
+=== From version 1.6 to 2.0
+
+include::migrate-1.6-to-2.0.adoc[]
+
+=== From version 1.5 to 1.6
+
+include::migrate-1.5-to-1.6.adoc[]
+
=== From version 1.4 to 1.5
include::migrate-1.4-to-1.5.adoc[]
diff --git a/manual/src/main/asciidoc/recipes.adoc
b/manual/src/main/asciidoc/recipes.adoc
index e878bfd9d..a074ebce1 100644
--- a/manual/src/main/asciidoc/recipes.adoc
+++ b/manual/src/main/asciidoc/recipes.adoc
@@ -10,7 +10,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
-//
+//
=== Recipes
==== Introduction
@@ -18,6 +18,53 @@
In this section of the documentation we provide quick recipes focused on
helping you achieve a specific result with
Apache Unomi.
+==== Enabling debug mode
+
+Although the examples provided in this documentation are correct (they will
work "as-is"),
+you might be tempted to modify them to fit your use case, which might result
in errors.
+
+The best approach during development is to enable Apache Unomi debug mode,
which will provide
+you with more detailed logs about events processing.
+
+The debug mode can be activated via the Karaf SSH console (default credentials
are karaf/karaf):
+
+[source]
+----
+ubuntu@ip-10-0-3-252:~/$ ssh -p 8102 karaf@localhost
+Password authentication
+Password:
+ __ __ ____
+ / //_/____ __________ _/ __/
+ / ,< / __ `/ ___/ __ `/ /_
+ / /| |/ /_/ / / / /_/ / __/
+ /_/ |_|\__,_/_/ \__,_/_/
+
+ Apache Karaf (4.2.15)
+
+Hit '<tab>' for a list of available commands
+and '[cmd] --help' for help on a specific command.
+Hit 'system:shutdown' to shutdown Karaf.
+Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.
+
+karaf@root()> log:set DEBUG org.apache.unomi.schema.impl.SchemaServiceImpl
+----
+
+You can then either watch the logs via your preferred logging mechanism
(docker logs, log file, ...) or
+simply tail the logs to the terminal you used to enable debug mode.
+
+[source]
+----
+karaf@root()> log:tail
+08:55:28.128 DEBUG [qtp1422628821-128] Schema validation found 2 errors while
validating against schema:
https://unomi.apache.org/schemas/json/events/view/1-0-0
+08:55:28.138 DEBUG [qtp1422628821-128] Validation error: There are unevaluated
properties at following paths $.source.properties
+08:55:28.140 DEBUG [qtp1422628821-128] Validation error: There are unevaluated
properties at following paths $.source.itemId, $.source.itemType,
$.source.scope, $.source.properties
+08:55:28.142 ERROR [qtp1422628821-128] An event was rejected - switch to DEBUG
log level for more information
+----
+
+The example above shows schema validation failure at the `$.source.properties`
path.
+Note that the validation will output one log line for the exact failing path
and a log line for its parent;
+therefore, to find the source of a schema validation issue it's best to start
from the top.
+
==== How to read a profile
The simplest way to retrieve profile data for the current profile is to simply
send a request to the /cxs/context.json
@@ -29,8 +76,7 @@ Here is an example that will retrieve all the session and
profile properties, as
----
curl -X POST http://localhost:8181/cxs/context.json?sessionId=1234 \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{
+--data-raw '{
"source": {
"itemId":"homepage",
"itemType":"page",
@@ -40,14 +86,13 @@ curl -X POST
http://localhost:8181/cxs/context.json?sessionId=1234 \
"requiredSessionProperties":["*"],
"requireSegments":true,
"requireScores":true
-}
-EOF
+}'
----
The `requiredProfileProperties` and `requiredSessionProperties` are properties
that take an array of property names
that should be retrieved. In this case we use the wildcard character '*' to
say we want to retrieve all the available
-properties. The structure of the JSON object that you should send is a
JSON-serialized version of the
http://unomi.apache.org/unomi-api/apidocs/org/apache/unomi/api/ContextRequest.html[ContextRequest]
-Java class.
+properties. The structure of the JSON object that you should send is a
JSON-serialized version of the
+http://unomi.apache.org/unomi-api/apidocs/org/apache/unomi/api/ContextRequest.html[ContextRequest]
Java class.
Note that it is also possible to access a profile's data through the
/cxs/profiles/ endpoint but that really should be
reserved to administrative purposes. All public accesses should always use the
/cxs/context.json endpoint for consistency
@@ -78,10 +123,8 @@ ones that you might not want to be overriden.
Instead you can use the following solutions to update profiles:
- (Preferred) Use your own custom event(s) to send data you want to be inserted
in a profile, and use rules to map the
-event data to the profile. This is simpler than it sounds, as usually all it
requires is setting up a simple rule and
-you're ready to update profiles using events. This is also the safest way to
update a profile because if you design your
-events to be as specific as possible to your needs, only the data that you
specified will be copied to the profile,
-making sure that even in the case an attacker tries to send more data using
your custom event it will simply be ignored.
+event data to the profile. This is simpler than it sounds: usually all it
requires is setting up a simple rule,
+defining the corresponding JSON schema, and you are ready to update profiles
using events.
- Use the protected built-in "updateProperties" event. This event is designed
to be used for administrative purposes
only. Again, prefer the custom events solution because as this is a protected
event it will require sending the Unomi
@@ -96,8 +139,7 @@ Let's go into more detail about the preferred way to update
a profile. Let's con
curl -X POST http://localhost:8181/cxs/rules \
--user karaf:karaf \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{
+--data-raw '{
"metadata": {
"id": "setContactInfo",
"name": "Copy the received contact info to the current profile",
@@ -136,8 +178,7 @@ curl -X POST http://localhost:8181/cxs/rules \
}
}
]
-}
-EOF
+}'
----
What this rule does is listen for a custom event (events don't need
any registration, you can simply start
@@ -146,14 +187,80 @@ sending them to Apache Unomi whenever you like) of type
'contactInfoSubmitted' a
course change any of the property names to fit your needs. For example, you
might want to prefix the profile properties
with the source of the event, such as 'mobileApp:firstName'.
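As a sketch of what such a mapping could look like, a rule action could copy the event's "firstName" into a prefixed profile property along these lines. The parameter names follow Unomi's built-in `setPropertyAction`; the "mobileApp:" prefix is purely illustrative.

```shell
# Hedged sketch: a rule action fragment copying the event's "firstName" into
# a prefixed profile property. The "mobileApp:" prefix is an illustration.
ACTION='{
  "type": "setPropertyAction",
  "parameterValues": {
    "setPropertyName": "properties(mobileApp:firstName)",
    "setPropertyValue": "eventProperty::properties(firstName)",
    "setPropertyStrategy": "alwaysSet"
  }
}'
# Check the fragment is well-formed JSON before pasting it into a rule.
echo "$ACTION" | python3 -m json.tool > /dev/null
```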
-You could then simply send the `contactInfoSubmitted` event using a request
similar to this one:
+Now that our rule is defined, the next step is to create a scope and a JSON
Schema corresponding to the event to be submitted.
+
+We will start by creating a scope called "example":
+[source]
+----
+curl --location --request POST 'http://localhost:8181/cxs/scopes' \
+-u 'karaf:karaf' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"itemId": "example",
+"itemType": "scope"
+}'
+----
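To check that the scope was actually registered, the scopes endpoint can be queried back. This is a sketch assuming the default karaf:karaf credentials; the exact response shape may vary between versions.

```shell
# Hedged sketch: list registered scopes to confirm "example" was created.
# "|| true" keeps the example harmless when no server is running.
curl -s -X GET 'http://localhost:8181/cxs/scopes' \
  -u 'karaf:karaf' || true
```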
+
+The next step consists of creating a JSON schema to validate our event.
+
+[source]
+----
+curl --location --request POST 'http://localhost:8181/cxs/jsonSchema' \
+-u 'karaf:karaf' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+ "$id":
"https://unomi.apache.org/schemas/json/events/contactInfoSubmitted/1-0-0",
+ "$schema": "https://json-schema.org/draft/2019-09/schema",
+ "self": {
+ "vendor": "org.apache.unomi",
+ "name": "contactInfoSubmitted",
+ "format": "jsonschema",
+ "target": "events",
+ "version": "1-0-0"
+ },
+ "title": "contactInfoSubmittedEvent",
+ "type": "object",
+ "allOf": [{ "$ref": "https://unomi.apache.org/schemas/json/event/1-0-0" }],
+ "properties": {
+ "source" : {
+ "$ref" : "https://unomi.apache.org/schemas/json/item/1-0-0"
+ },
+ "target" : {
+ "$ref" : "https://unomi.apache.org/schemas/json/item/1-0-0"
+ },
+ "properties": {
+ "type": "object",
+ "properties": {
+ "firstName": {
+ "type": ["null", "string"]
+ },
+ "lastName": {
+ "type": ["null", "string"]
+ },
+ "email": {
+ "type": ["null", "string"]
+ }
+ }
+ }
+ },
+ "unevaluatedProperties": false
+}'
+----
+
+Note the following in the above schema:
+
+* We are creating a schema of type "events" ("self.target" equals "events")
+* The name of this schema is "contactInfoSubmitted"; it MUST match the value
of the "eventType" field in the event itself (below)
+* To simplify our schema declaration, we're referring to an already existing
schema (https://unomi.apache.org/schemas/json/item/1-0-0) to validate the
"source" and "target" properties. Apache Unomi ships with a set of predefined
JSON Schemas, detailed here:
https://github.com/apache/unomi/tree/master/extensions/json-schema/services/src/main/resources/META-INF/cxs/schemas.
+* `"unevaluatedProperties": false` indicates that the event will be rejected
if it contains any properties not declared in the schema.
+
+Finally, send the `contactInfoSubmitted` event using a request similar to this
one:
[source]
----
curl -X POST http://localhost:8181/cxs/eventcollector \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{
+--data-raw '{
"sessionId" : "1234",
"events":[
{
@@ -161,25 +268,69 @@ curl -X POST http://localhost:8181/cxs/eventcollector \
"scope": "example",
"source":{
"itemType": "site",
- "scope":"example",
+ "scope": "example",
"itemId": "mysite"
},
"target":{
- "itemType":"form",
- "scope":"example",
- "itemId":"contactForm"
+ "itemType": "form",
+ "scope": "example",
+ "itemId": "contactForm"
},
"properties" : {
- "firstName" : "John",
- "lastName" : "Doe",
- "email" : "[email protected]"
+ "firstName": "John",
+ "lastName": "Doe",
+ "email": "[email protected]"
}
}
]
-}
-EOF
+}'
----
+The event we just submitted can be retrieved using the following request:
+
+[source]
+----
+curl -X POST http://localhost:8181/cxs/events/search \
+--user karaf:karaf \
+-H "Content-Type: application/json" \
+--data-raw '{
+ "offset" : 0,
+ "limit" : 20,
+ "condition" : {
+ "type": "eventPropertyCondition",
+ "parameterValues" : {
+ "propertyName" : "properties.firstName",
+ "comparisonOperator" : "equals",
+ "propertyValue" : "John"
+ }
+ }
+}'
+----
+
+===== Troubleshooting common errors
+
+Two types of errors commonly occur when customizing the above requests:
+
+* The schema is invalid
+* The event is invalid
+
+When a schema is first submitted, Apache Unomi only validates that it is
syntactically correct JSON
+and performs no further checks. Since the schema is processed for the first
time when events are submitted,
+errors may only become visible at that point.
+
+Those errors are usually self-explanatory, such as this one pointing to an
incorrect location for the "firstName" keyword:
+[source]
+----
+09:35:56.573 WARN [qtp1421852915-83] Unknown keyword firstName - you should
define your own Meta Schema. If the keyword is irrelevant for validation, just
use a NonValidationKeyword
+----
+
+If an event is invalid, the logs will contain details about the part of the
event that did not validate against the schema.
+In the example below, an extra property "abcd" was added to the event:
+[source]
+----
+12:27:04.269 DEBUG [qtp1421852915-481] Schema validation found 1 errors while
validating against schema:
https://unomi.apache.org/schemas/json/events/contactInfoSubmitted/1-0-0
+12:27:04.272 DEBUG [qtp1421852915-481] Validation error: There are unevaluated
properties at following paths $.properties.abcd
+12:27:04.273 ERROR [qtp1421852915-481] An event was rejected - switch to DEBUG
log level for more information
+----
==== How to search for profile events
@@ -191,8 +342,8 @@ that looks something like this (and
https://unomi.apache.org/rest-api-doc/#17681
curl -X POST http://localhost:8181/cxs/events/search \
--user karaf:karaf \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{ "offset" : 0,
+--data-raw '{
+ "offset" : 0,
"limit" : 20,
"condition" : {
"type": "eventPropertyCondition",
@@ -202,8 +353,7 @@ curl -X POST http://localhost:8181/cxs/events/search \
"propertyValue" : "PROFILE_ID"
}
}
-}
-EOF
+}'
----
where PROFILE_ID is a profile identifier. This will indeed retrieve all the
events for a given profile.
@@ -224,8 +374,7 @@ on the Apache Unomi server.
curl -X POST http://localhost:8181/cxs/rules \
--user karaf:karaf \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{
+--data-raw '{
"metadata": {
"id": "exampleEventCopy",
"name": "Example Copy Event to Profile",
@@ -244,8 +393,7 @@ curl -X POST http://localhost:8181/cxs/rules \
"type": "allEventToProfilePropertiesAction"
}
]
-}
-EOF
+}'
----
The above rule will be executed if the incoming event is of type `myEvent` and
will simply copy all the properties
@@ -261,8 +409,7 @@ structure. Here's an example of a profile search with a
Query object:
curl -X POST http://localhost:8181/cxs/profiles/search \
--user karaf:karaf \
-H "Content-Type: application/json" \
--d @- <<'EOF'
-{
+--data-raw '{
"text" : "unomi",
"offset" : 0,
"limit" : 10,
@@ -289,8 +436,7 @@ curl -X POST http://localhost:8181/cxs/profiles/search \
]
}
}
-}
-EOF
+}'
----
In the above example, you search for all the profiles that have the
`leadAssignedTo` and `lastName` properties and that
diff --git a/manual/src/main/asciidoc/request-examples.adoc
b/manual/src/main/asciidoc/request-examples.adoc
index 71301c831..819e0188d 100644
--- a/manual/src/main/asciidoc/request-examples.adoc
+++ b/manual/src/main/asciidoc/request-examples.adoc
@@ -99,7 +99,7 @@ curl -X POST
http://localhost:8181/cxs/context.json?sessionId=1234 \
"itemId":"homepage",
"properties":{
"pageInfo":{
- "referringURL":""
+ "referringURL":"https://apache.org/"
}
}
}
@@ -141,7 +141,7 @@ curl -X POST http://localhost:8181/cxs/eventcollector \
"itemId":"homepage",
"properties":{
"pageInfo":{
- "referringURL":""
+ "referringURL":"https://apache.org/"
}
}
}
diff --git a/manual/src/main/asciidoc/whats-new.adoc
b/manual/src/main/asciidoc/whats-new.adoc
new file mode 100644
index 000000000..7ef355e3d
--- /dev/null
+++ b/manual/src/main/asciidoc/whats-new.adoc
@@ -0,0 +1,110 @@
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+//
+=== What's new in Apache Unomi 2.0
+
+Apache Unomi 2 is a new release focused on improving the core functionality
and robustness of the product.
+The introduction of tighter data validation with JSON Schemas required some
changes in the product data model, which presented an opportunity for
noticeable improvements in the overall performance.
+
+This new release also introduces a first (beta) version of the Unomi GraphQL
API.
+
+==== Scope declarations are now required
+
+Scope declarations are now required in Unomi 2. When submitting an event and
specifying a scope,
+that scope must already be declared on the platform.
+
+Scopes can be easily created via the corresponding REST API (`cxs/scopes`).
+
+For example, an "apache" scope can be created using the following API call.
+[source]
+----
+curl --location --request POST 'http://localhost:8181/cxs/scopes' \
+-u 'karaf:karaf' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+"itemId": "apache",
+"itemType": "scope"
+}'
+----
+
+==== JSON Schemas
+
+Apache Unomi 2 introduces support for
https://json-schema.org/specification.html[JSON Schema] for all of its publicly
exposed endpoints.
+Data received by Apache Unomi 2 will first be validated against a known schema
to make sure it complies with an expected payload.
+If the received payload does not match a known schema, it will be rejected by
Apache Unomi 2.
+
+Apache Unomi 2 also introduces a set of administrative endpoints allowing new
schemas and/or schema extensions to be registered.
+
+More details about JSON Schemas implementation are available in the <<JSON
schemas,corresponding section>> of the documentation.
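For example, the identifiers of currently registered schemas can presumably be listed back through the same administrative endpoint. This is only a sketch: the endpoint and response shape are assumptions to verify against the REST API documentation for your version.

```shell
# Hedged sketch: list the registered JSON schemas through the administrative
# endpoint. The endpoint shape is an assumption; "|| true" keeps the example
# harmless when no server is running.
curl -s -X GET 'http://localhost:8181/cxs/jsonSchema' \
  -u 'karaf:karaf' || true
```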
+
+==== Updated data model
+
+The introduction of JSON schema required us to modify the Apache Unomi data
model. One of the key differences is the removal of open maps.
+
+For example, the following event model in Apache Unomi 1.x:
+[source]
+----
+{
+ "TODO": "ADD JSON"
+}
+----
+
+Is replaced by the following in Apache Unomi 2.x:
+[source]
+----
+{
+ "TODO": "ADD JSON"
+}
+----
+
+Most objects were refactored as part of this new release.
+
+If you are using the default Apache Unomi 1.x data model, the Unomi 2
migration process will handle the data model changes for you.
+If you are using custom events/objects, please refer to the detailed migration
guide for more details.
+
+==== Removal of the Web Tracker
+
+The Apache Unomi web tracker, previously located in `extensions/web-tracker/`,
has been removed.
+We considered it outdated and instead recommend implementing your own
tracker logic based on your project's
+use case.
+
+[TODO: Add more details about the web tracker]
+
+==== GraphQL API (beta)
+
+Apache Unomi 2.0 sees the introduction of a new (beta) GraphQL API.
+Disabled by default behind a feature flag, the GraphQL API
is ready for you to experiment with.
+
+More details about how to enable/disable the GraphQL API are available in the
<<GraphQL API,corresponding section>> of the documentation.
+
+We welcome tickets/PRs to improve its robustness and progressively make it
ready for prime time.
+
+==== Migrate from Unomi 1.x
+
+To facilitate migration, we prepared a set of scripts that will automatically
handle the migration of your data from Apache Unomi 1.5+ to Apache Unomi 2.0.
+
+Keep in mind that Apache Unomi 2.0 does not support "hot" migration:
+the migration process requires a shutdown of your cluster to guarantee
that no new events are collected while data migration is in progress.
+
+Special caution must be taken if you declared custom events, as our migration
scripts can only handle objects we know of.
+More details about migration (including of custom events) are available in the
<<Migrations,corresponding section>> of the documentation.
+
+==== Elasticsearch compatibility
+
+We currently recommend using Elasticsearch 7.17.5 with Apache Unomi 2.0.
+This ensures you are on a recent version that is not impacted by the log4j
vulnerabilities (fixed in Elasticsearch 7.16.3).
+
+This version increase is related to Apache Unomi 2.0 making use of a new
Elasticsearch field type
+called
https://www.elastic.co/guide/en/elasticsearch/reference/7.17/flattened.html[Flattened],
+and although it was available in prior versions of Elasticsearch, we do not
recommend using those
+due to the above-mentioned log4j vulnerabilities.
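For local experimentation, a single-node Elasticsearch 7.17.5 can be started with Docker along these lines. This is a sketch: the container name and memory settings are arbitrary choices to adjust for your environment.

```shell
# Sketch: start a disposable single-node Elasticsearch 7.17.5 for local
# testing. "|| true" keeps the example harmless when Docker is unavailable.
docker run -d --name unomi-es \
  -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.5 || true
```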