This is an automated email from the ASF dual-hosted git repository.

shuber pushed a commit to branch UNOMI-797-migration-doc-fixes
in repository https://gitbox.apache.org/repos/asf/unomi.git


The following commit(s) were added to refs/heads/UNOMI-797-migration-doc-fixes by this push:
     new dac800fe9 UNOMI-797 Fixes in migration documentation - Simplified the text further based on the review.
dac800fe9 is described below

commit dac800fe9d9de4f8bc5f1bb41302e8f05980aa80
Author: Serge Huber <shu...@jahia.com>
AuthorDate: Fri Sep 1 09:58:07 2023 +0200

    UNOMI-797 Fixes in migration documentation
    - Simplified the text further based on the review.
---
 .../main/asciidoc/migrations/migrate-1.6-to-2.0.adoc    | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc b/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
index 58faf1242..65ee4af4a 100644
--- a/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
+++ b/manual/src/main/asciidoc/migrations/migrate-1.6-to-2.0.adoc
@@ -38,8 +38,19 @@ Once you updated your applications to align with Unomi 2 data model, the next st
 Any event (and more generally, any object) received through Unomi public endpoints requires a valid JSON schema.
 Apache Unomi ships, out of the box, with all of the necessary JSON Schemas for its own operation, as well as for all event types generated by the Apache Unomi Web Tracker, but you will need to create schemas for any custom event you may be using.
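For illustration, a minimal schema for a hypothetical custom event could look as follows (a sketch only: the `dummy` event name, the vendor identifier, and the base event schema `$ref` are assumptions to adapt to your setup; see the JSON schema section of this manual for the exact format):

    {
      "$id": "https://vendor.test.com/schemas/json/events/dummy/1-0-0",
      "$schema": "https://json-schema.org/draft/2019-09/schema",
      "self": {
        "vendor": "com.vendor.test",
        "name": "dummy",
        "format": "jsonschema",
        "version": "1-0-0"
      },
      "title": "DummyEvent",
      "type": "object",
      "allOf": [
        { "$ref": "https://unomi.apache.org/schemas/json/event/1-0-0" }
      ],
      "properties": {
        "properties": {
          "type": "object",
          "properties": {
            "myCustomProperty": { "type": "string" }
          }
        }
      }
    }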
 
-When creating your new schemas, you can setup debug messages in the logs (using: `log:set DEBUG org.apache.unomi.schema.impl.SchemaServiceImpl` in Karaf console) that
-will point to errors in your schemas or will help you diagnose why the events are not being accepted. It is also possible to use the UNOMI_LOGS_JSONSCHEMA_LEVEL environment variable (by setting it to the `DEBUG` value) and then restarting Apache Unomi to accomplish the same thing. The second option is especially useful when using Docker containers. It is also possible to test if your events are valid with the a new API endpoint mapped at `/cxs/jsonSchema/validateEvent`.
+When creating new schemas, you can test them in multiple ways:
+
+- Using the event validation API endpoint mapped at `/cxs/jsonSchema/validateEvent` (see the example request below)
+- Using debug logs when sending events through the usual endpoints (`/context.json` or `/eventcollector`)
+
+Note that in both cases it helps to activate the debug logs.
+
+Debug logs can be activated in two ways:
+
+- Through the Karaf SSH console command: `log:set DEBUG org.apache.unomi.schema.impl.SchemaServiceImpl`
+- Using the UNOMI_LOGS_JSONSCHEMA_LEVEL=DEBUG environment variable and then restarting Apache Unomi. This is especially useful when using Docker containers (see the sketch below).
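For example, with the official Docker image this could be passed as follows (a minimal sketch assuming the `apache/unomi` image and default ports; a real setup would also configure the Elasticsearch connection):

    # Hypothetical example: start Unomi with JSON schema debug logging enabled
    docker run -p 8181:8181 -e UNOMI_LOGS_JSONSCHEMA_LEVEL=DEBUG apache/unomi:2.0.0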
+
+Once debug logs are active, you will see detailed error messages when your events do not match any deployed JSON schema.
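For example, a validation request could look like this (a sketch assuming a local instance, the default `karaf`/`karaf` credentials, and a hypothetical `dummy` event type):

    # Hypothetical example: submit an event to the validation endpoint
    curl -X POST http://localhost:8181/cxs/jsonSchema/validateEvent \
         -u karaf:karaf \
         -H "Content-Type: application/json" \
         -d '{"eventType": "dummy", "scope": "example", "properties": {"myCustomProperty": "value"}}'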
 
 Note that it is currently not possible to modify or override an existing system-deployed JSON schema via the REST API. It is however possible to deploy new schemas and manage them through the REST API on the `/cxs/jsonSchema` endpoint.
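For example, a new schema could be deployed like this (again a sketch with assumed defaults, where `schema.json` contains a schema such as the one sketched earlier):

    # Hypothetical example: deploy a custom JSON schema
    curl -X POST http://localhost:8181/cxs/jsonSchema \
         -u karaf:karaf \
         -H "Content-Type: application/json" \
         --data @schema.json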
 If you are currently using custom properties on an Apache Unomi-provided event type,
@@ -58,7 +69,7 @@ While still using Unomi 1.6, the first step will be to upgrade your Elasticsearc
 Documentation is available on https://www.elastic.co/guide/en/elasticsearch/reference/7.17/setup-upgrade.html[Elasticsearch's website].
 
 Your Elasticsearch cluster must have enough capacity to handle the migration.
-At a minimum, the required capacity storage capacity must be greater than the size of the dataset in production + the size of the largest index and any other settings should at least be as big as the source setup (preferably higher).
+At a minimum, the required storage capacity must be greater than the size of the dataset in production plus the size of the largest index. Any other settings should be at least as large as those of the source setup (preferably higher).
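For example (hypothetical figures): if the production dataset is 100 GB and its largest index is 30 GB, the target cluster should have more than 130 GB of storage available before starting the migration.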
 
 ==== Migrate custom data
 
