jkevan commented on code in PR #715:
URL: https://github.com/apache/unomi/pull/715#discussion_r1939679779


##########
services/src/main/java/org/apache/unomi/services/impl/events/EventServiceImpl.java:
##########
@@ -127,10 +116,17 @@ public String authenticateThirdPartyServer(String key, String ip) {
             for (Map.Entry<String, ThirdPartyServer> entry : thirdPartyServers.entrySet()) {
                 ThirdPartyServer server = entry.getValue();
                 if (server.getKey().equals(key)) {
-                    IPAddress ipAddress = new IPAddressString(ip).getAddress();
-                    for (IPAddress serverIpAddress : server.getIpAddresses()) {
-                        if (serverIpAddress.contains(ipAddress)) {
-                            return server.getId();
+                    // This is a fix to support proper IPv6 parsing
+                    if (ip != null) {
+                        if (ip.startsWith("[") && ip.endsWith("]")) {
+                            // This can happen with IPv6 addresses, we must remove the markers since our IPAddress library doesn't support them.
+                            ip = ip.substring(1, ip.length() - 1);
+                        }
+                        IPAddress ipAddress = new IPAddressString(ip).getAddress();
+                        for (IPAddress serverIpAddress : server.getIpAddresses()) {
+                            if (serverIpAddress.contains(ipAddress)) {
+                                return server.getId();
+                            }

Review Comment:
   This fix looks OK, but what is the link with OpenSearch support?
   If it's fixing a bug or issue, wouldn't it be more appropriate to separate it into a dedicated PR, with its own tests and ticket?
   (The current PR is already quite huge, making it difficult to review; I think additional fixes not related to the OpenSearch implementation should not be part of it.)
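As a standalone illustration of the fix in the diff above, the bracket normalization behaves as follows. This is a minimal sketch: the class and method names are mine, not Unomi's, and the `IPAddressString` parsing step is left out so the snippet stays self-contained.

```java
public class Ipv6BracketNormalizer {

    // Mirrors the bracket stripping in the diff above: "[2001:db8::1]" -> "2001:db8::1".
    // The bracketed form comes from URL notation (RFC 3986); per the diff's comment,
    // the IPAddress library used by Unomi expects the bare address instead.
    static String stripBrackets(String ip) {
        if (ip != null && ip.startsWith("[") && ip.endsWith("]")) {
            return ip.substring(1, ip.length() - 1);
        }
        // IPv4 addresses and already-bare IPv6 addresses pass through unchanged
        return ip;
    }

    public static void main(String[] args) {
        System.out.println(stripBrackets("[2001:db8::1]")); // 2001:db8::1
        System.out.println(stripBrackets("192.168.0.1"));   // 192.168.0.1
    }
}
```

The null check matters because the original code dereferenced `ip` without one; the guard makes the helper safe for all three shapes of input.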



##########
graphql/cxs-impl/src/main/java/org/apache/unomi/graphql/condition/parsers/SegmentProfileEventsConditionParser.java:
##########
@@ -185,9 +185,16 @@ private Map<String, Object> createProfileEventPropertyField(final Condition cond
                 tuple.put("fieldName", "cdp_timestamp_gte");
             }
 
-            final OffsetDateTime fieldValue = DateUtils.offsetDateTimeFromMap((Map<String, Object>) condition.getParameter("propertyValueDate"));
-
-            tuple.put("fieldValue", fieldValue != null ? fieldValue.toString() : null);
+            Object propertyValueDate = condition.getParameter("propertyValueDate");
+            if (propertyValueDate == null) {
+                tuple.put("fieldValue", null);
+            } else if (propertyValueDate instanceof Map){
+                // This shouldn't be needed since Jackson was upgraded to > 2.13, but we keep it for backwards compatibility with older data sets
+                final OffsetDateTime fieldValue = DateUtils.offsetDateTimeFromMap((Map<String, Object>) propertyValueDate);
+                tuple.put("fieldValue", fieldValue != null ? fieldValue.toString() : null);
+            } else {
+                tuple.put("fieldValue", propertyValueDate.toString());
+            }

Review Comment:
   Is this related to the OpenSearch support implementation, or is it a separate, additional fix?
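The three-way dispatch in the diff above can be sketched in isolation. This is an assumption-laden stand-in: `fromLegacyMap` below is my simplified stub for Unomi's `DateUtils.offsetDateTimeFromMap` (reading a hypothetical `{"year":..,"monthValue":..,"dayOfMonth":..}` shape), not the real helper.

```java
import java.time.OffsetDateTime;
import java.time.ZoneOffset;
import java.util.HashMap;
import java.util.Map;

public class DateParameterDispatch {

    // Simplified stand-in for DateUtils.offsetDateTimeFromMap: older Jackson
    // versions serialized OffsetDateTime as a Map of calendar fields.
    static OffsetDateTime fromLegacyMap(Map<String, Object> map) {
        Object year = map.get("year");
        if (!(year instanceof Integer)) return null;
        int month = map.get("monthValue") instanceof Integer ? (Integer) map.get("monthValue") : 1;
        int day = map.get("dayOfMonth") instanceof Integer ? (Integer) map.get("dayOfMonth") : 1;
        return OffsetDateTime.of((Integer) year, month, day, 0, 0, 0, 0, ZoneOffset.UTC);
    }

    // Same branching as the diff: null, legacy Map, or an already-serialized
    // value (post-Jackson-2.13 data sets carry a plain string).
    @SuppressWarnings("unchecked")
    static String toFieldValue(Object propertyValueDate) {
        if (propertyValueDate == null) {
            return null;
        } else if (propertyValueDate instanceof Map) {
            OffsetDateTime dt = fromLegacyMap((Map<String, Object>) propertyValueDate);
            return dt != null ? dt.toString() : null;
        } else {
            return propertyValueDate.toString();
        }
    }

    public static void main(String[] args) {
        Map<String, Object> legacy = new HashMap<>();
        legacy.put("year", 2024);
        legacy.put("monthValue", 5);
        legacy.put("dayOfMonth", 17);
        System.out.println(toFieldValue(legacy));              // 2024-05-17T00:00Z
        System.out.println(toFieldValue("2024-05-17T00:00Z")); // 2024-05-17T00:00Z
    }
}
```

The point of the branch is backwards compatibility: old data sets store the date as a Map, new ones as a string, and both must end up as a string field value.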



##########
itests/src/test/java/org/apache/unomi/itests/ProgressListener.java:
##########
@@ -0,0 +1,275 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.itests;
+
+import org.junit.runner.Description;
+import org.junit.runner.Result;
+import org.junit.runner.notification.Failure;
+import org.junit.runner.notification.RunListener;
+
+import java.util.PriorityQueue;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public class ProgressListener extends RunListener {

Review Comment:
   What is the goal of this new class/runner?
   A bit of Javadoc would be welcome to understand it.
   It looks like a nice addition to the integration tests (even if I don't get exactly what it does), but I don't see the relation to OpenSearch support.
   Given the complexity of the current PR, which contains a lot of material, I would greatly appreciate a separation between the OpenSearch support and the unrelated changes.



##########
api/src/main/java/org/apache/unomi/api/services/EventService.java:
##########
@@ -161,4 +161,10 @@ public interface EventService {
      * @param profileId identifier of the profile that we want to remove it's events
      */
     void removeProfileEvents(String profileId);
+
+    /**
+     * Delete an event by specifying its event identifier
+     * @param eventIdentifier the unique identifier for the event
+     */
+    void deleteEvent(String eventIdentifier);

Review Comment:
   This new API looks convenient, but I don't see the relation with the OpenSearch implementation.
   What is the need to introduce it now?
   I don't see any test coverage for this new method.



##########
itests/src/test/java/org/apache/unomi/itests/RuleServiceIT.java:
##########
@@ -80,16 +80,27 @@ public void testRuleWithNullActions() throws InterruptedException {
     public void getAllRulesShouldReturnAllRulesAvailable() throws InterruptedException {
         String ruleIDBase = "moreThan50RuleTest";
         int originalRulesNumber = rulesService.getAllRules().size();
+
+        // Create a simple condition instead of null
+        Condition defaultCondition = new Condition(definitionsService.getConditionType("matchAllCondition"));
+
+        // Create a default action
+        Action defaultAction = new Action(definitionsService.getActionType("setPropertyAction"));
+        defaultAction.setParameter("propertyName", "testProperty");
+        defaultAction.setParameter("propertyValue", "testValue");
+        List<Action> actions = Collections.singletonList(defaultAction);
+
+
         for (int i = 0; i < 60; i++) {
             String ruleID = ruleIDBase + "_" + i;
             Metadata metadata = new Metadata(ruleID);
             metadata.setName(ruleID);
             metadata.setDescription(ruleID);
             metadata.setScope(TEST_SCOPE);
-            Rule nullRule = new Rule(metadata);
-            nullRule.setCondition(null);
-            nullRule.setActions(null);
-            createAndWaitForRule(nullRule);
+            Rule rule = new Rule(metadata);
+            rule.setCondition(defaultCondition);  // Use default condition instead of null
+            rule.setActions(actions);  // Empty list instead of null

Review Comment:
   Your comment seems incorrect: `actions` actually does contain an action.



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/CustomObjectMapper.java:
##########
@@ -58,6 +58,7 @@ public CustomObjectMapper() {
     public CustomObjectMapper(Map<Class, StdDeserializer<?>> deserializers) {
         super();
         super.registerModule(new JaxbAnnotationModule());
+        super.registerModule(new com.fasterxml.jackson.datatype.jsr310.JavaTimeModule());

Review Comment:
   Could you add a comment explaining why this new Jackson module is necessary?



##########
rest/src/main/java/org/apache/unomi/rest/service/impl/RestServiceUtilsImpl.java:
##########
@@ -145,7 +145,13 @@ public EventsRequestContext initEventsRequest(String scope, String sessionId, St
                         // Session user has been switched, profile id in cookie is not up to date
                         // We must reload the profile with the session ID as some properties could be missing from the session profile
                         // #personalIdentifier
-                        eventsRequestContext.setProfile(profileService.load(sessionProfile.getItemId()));
+                        Profile sessionProfileWithId = profileService.load(sessionProfile.getItemId());
+                        if (sessionProfileWithId != null) {
+                            eventsRequestContext.setProfile(sessionProfileWithId);
+                        } else {
+                            LOGGER.warn("Couldn't find profile ID {} referenced from session with ID {}, so we re-create it", sessionProfile.getItemId(), sessionId);
+                            eventsRequestContext.setProfile(createNewProfile(sessionProfile.getItemId(), timestamp));
+                        }

Review Comment:
   Interesting check for ensuring the profile is correctly created, but this does not look related to OpenSearch support; I would prefer a dedicated ticket and test coverage for this new handling.
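The fallback pattern the diff introduces can be sketched against an in-memory store. Everything here is a stand-in: the `Map`-backed store and `loadOrRecreate` name are mine, not Unomi's `profileService`.

```java
import java.util.HashMap;
import java.util.Map;

public class ProfileLoadOrCreate {

    // In-memory stand-in for the profile store; in Unomi the real store is
    // backed by the search engine, so a referenced profile can be missing
    // (e.g. after a purge) even when the session still points at it.
    static final Map<String, String> profiles = new HashMap<>();

    // Load the profile referenced by the session; if it no longer exists,
    // re-create it under the same id so the caller never receives null.
    static String loadOrRecreate(String profileId) {
        String profile = profiles.get(profileId);
        if (profile == null) {
            profile = "profile:" + profileId;
            profiles.put(profileId, profile);
        }
        return profile;
    }

    public static void main(String[] args) {
        System.out.println(loadOrRecreate("p-42")); // re-created on the fly
        profiles.put("p-1", "profile:p-1(existing)");
        System.out.println(loadOrRecreate("p-1")); // existing entry returned as-is
    }
}
```

The design choice being reviewed is exactly this: rather than propagating a null profile into the request context, the missing profile is recreated with the same id, at the cost of masking whatever deleted it.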



##########
tools/shell-dev-commands/src/main/java/org/apache/unomi/shell/commands/EventRemove.java:
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.shell.commands;
+
+import org.apache.karaf.shell.api.action.Argument;
+import org.apache.karaf.shell.api.action.Command;
+import org.apache.karaf.shell.api.action.lifecycle.Reference;
+import org.apache.karaf.shell.api.action.lifecycle.Service;
+import org.apache.unomi.api.services.EventService;
+
+@Command(scope = "unomi", name = "event-remove", description = "This command will remove an event.")
+@Service
+public class EventRemove extends RemoveCommandSupport {

Review Comment:
   Nice addition, but is it related to OpenSearch support?



##########
manual/src/main/asciidoc/configuration.adoc:
##########
@@ -68,20 +68,53 @@ org.apache.unomi.cluster.public.address=http://localhost:8181
 org.apache.unomi.cluster.internal.address=https://localhost:9443
 ----
 
-If you need to specify an ElasticSearch cluster name, or a host and port that are different than the default,
-it is recommended to do this BEFORE you start the server for the first time, or you will loose all the data
+If you need to specify a search engine configuration that is different than the default,
+it is recommended to do this BEFORE you start the server for the first time, or you will lose all the data
 you have stored previously.
 
-You can use the following properties for the ElasticSearch configuration
+Apache Unomi supports both ElasticSearch and OpenSearch as search engine backends. Here are the configuration properties for each:
+
+For ElasticSearch:
 [source]
 ----
 org.apache.unomi.elasticsearch.cluster.name=contextElasticSearch
-# The elasticsearch.adresses may be a comma seperated list of host names and ports such as
+# The elasticsearch.addresses may be a comma separated list of host names and ports such as
 # hostA:9200,hostB:9200
 # Note: the port number must be repeated for each host.
 org.apache.unomi.elasticsearch.addresses=localhost:9200
 ----
 
+For OpenSearch:
+[source]
+----
+org.apache.unomi.opensearch.cluster.name=opensearch-cluster
+# The opensearch.addresses may be a comma separated list of host names and ports such as
+# hostA:9200,hostB:9200
+# Note: the port number must be repeated for each host.
+org.apache.unomi.opensearch.addresses=localhost:9200
+
+# OpenSearch security settings (required by default since OpenSearch 2.0)
+org.apache.unomi.opensearch.ssl.enable=true
+org.apache.unomi.opensearch.username=admin
+org.apache.unomi.opensearch.password=${env:OPENSEARCH_INITIAL_ADMIN_PASSWORD:-admin}
+org.apache.unomi.opensearch.sslTrustAllCertificates=true
+----
+
+To select which search engine to use, you can:
+1. Use the appropriate configuration properties above
+2. When building from source, use the appropriate Maven profile:
+   * For ElasticSearch (default): no special profile needed
+   * For OpenSearch: add `-Duse.opensearch=true` to your Maven command
Review Comment:
   To be clarified: the text mentions an impact on the build, but `use.opensearch` seems to be used only for integration tests, so I'm not sure it's useful to document this option here.



##########
extensions/privacy-extension/rest/pom.xml:
##########
@@ -88,6 +88,7 @@
 
     <build>
         <plugins>
+            <!--

Review Comment:
   Why is this disabled?
   A comment with an explanation would be welcome here.



##########
docker/src/main/docker/entrypoint.sh:
##########
@@ -16,28 +16,110 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
################################################################################
-# Wait for healthy ElasticSearch
-# next wait for ES status to turn to Green
 
-if [ "$UNOMI_ELASTICSEARCH_SSL_ENABLE" == 'true' ]; then
-  schema='https'
+# Near the top of the file, add these default debug settings
+KARAF_DEBUG=${KARAF_DEBUG:-false}
+KARAF_DEBUG_PORT=${KARAF_DEBUG_PORT:-5005}
+KARAF_DEBUG_SUSPEND=${KARAF_DEBUG_SUSPEND:-n}
+
+# Before starting Karaf, add debug configuration
+if [ "$KARAF_DEBUG" = "true" ]; then
+    echo "Enabling Karaf debug mode on port $KARAF_DEBUG_PORT (suspend=$KARAF_DEBUG_SUSPEND)"
+    export JAVA_DEBUG_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=$KARAF_DEBUG_SUSPEND,address=*:$KARAF_DEBUG_PORT"
+    export KARAF_DEBUG=true
+fi
+
+# Determine search engine type from UNOMI_AUTO_START
+SEARCH_ENGINE="${UNOMI_AUTO_START:-elasticsearch}"
+export KARAF_OPTS="-Dunomi.autoStart=${UNOMI_AUTO_START}"
+
+if [ "$SEARCH_ENGINE" = "true" ]; then
+    SEARCH_ENGINE="elasticsearch"
+fi
+echo "SEARCH_ENGINE: $SEARCH_ENGINE"
+echo "KARAF_OPTS: $KARAF_OPTS"
+
+# Function to check cluster health for a specific node
+check_node_health() {
+    local node_url="$1"
+    local curl_opts="$2"
+    response=$(eval curl -v -fsSL ${curl_opts} "${node_url}" 2>&1)
+    if [ $? -eq 0 ]; then
+        echo "$response" | grep -o '"status"[ ]*:[ ]*"[^"]*"' | cut -d'"' -f4
+    else
+        echo ""
+    fi
+}
+
+# Configure connection parameters based on search engine type
+if [ "$SEARCH_ENGINE" = "opensearch" ]; then
+    # OpenSearch configuration
+    if [ -z "$UNOMI_OPENSEARCH_PASSWORD" ]; then
+        echo "Error: UNOMI_OPENSEARCH_PASSWORD must be set when using OpenSearch"
+        exit 1
+    fi
+
+    schema='https'
+    auth_header="Authorization: Basic $(echo -n "admin:${UNOMI_OPENSEARCH_PASSWORD}" | base64)"
+    health_endpoint="_cluster/health"
+    curl_opts="-k -H \"${auth_header}\" -H \"Content-Type: application/json\""
 else
-  schema='http'
+    # Elasticsearch configuration
+    if [ "$UNOMI_ELASTICSEARCH_SSL_ENABLE" = 'true' ]; then
+        schema='https'
+    else
+        schema='http'
+    fi
+
+    if [ -v UNOMI_ELASTICSEARCH_USERNAME ] && [ -v UNOMI_ELASTICSEARCH_PASSWORD ]; then
+        auth_header="Authorization: Basic $(echo -n "${UNOMI_ELASTICSEARCH_USERNAME}:${UNOMI_ELASTICSEARCH_PASSWORD}" | base64)"
+        curl_opts="-H \"${auth_header}\""
+    fi
+    health_endpoint="_cluster/health"
 fi
 
-if [ -v UNOMI_ELASTICSEARCH_USERNAME ] && [ -v UNOMI_ELASTICSEARCH_PASSWORD ]; then
-  elasticsearch_addresses="$schema://$UNOMI_ELASTICSEARCH_USERNAME:$UNOMI_ELASTICSEARCH_PASSWORD@$UNOMI_ELASTICSEARCH_ADDRESSES/_cat/health?h=status"
+# Build array of node URLs
+if [ "$SEARCH_ENGINE" = "opensearch" ]; then

Review Comment:
   Seems this if/else tests `$SEARCH_ENGINE` again; it could be merged with the previous one: https://github.com/apache/unomi/pull/715/files#diff-f7001630e8e432c22a83c67318dbe3b91a44ccd7ffd2548251309ba4551ccdf1R55.



##########
docker/README.md:
##########
@@ -48,21 +48,66 @@ For ElasticSearch:
     docker pull docker.elastic.co/elasticsearch/elasticsearch:7.4.2
     docker network create unomi
     docker run --name elasticsearch --net unomi -p 9200:9200 -p 9300:9300 -e 
"discovery.type=single-node" -e cluster.name=contextElasticSearch 
docker.elastic.co/elasticsearch/elasticsearch:7.4.2
+
+For OpenSearch:
+
+    docker pull opensearchproject/opensearch:2.18.0
+    docker network create unomi
+    export OPENSEARCH_ADMIN_PASSWORD=enter_your_custom_admin_password_here
+    docker run --name opensearch --net unomi -p 9200:9200 -p 9300:9300 -e 
"discovery.type=single-node" -e 
OPENSEARCH_INITIAL_ADMIN_PASSWORD=${OPENSEARCH_ADMIN_PASSWORD} 
opensearchproject/opensearch:2.18.0
     
-For Unomi:
+For Unomi (with ElasticSearch):
+
+    docker pull apache/unomi:2.7.0-SNAPSHOT
+    docker run --name unomi --net unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 
\
+        -e UNOMI_ELASTICSEARCH_ADDRESSES=elasticsearch:9200 \
+        apache/unomi:2.7.0-SNAPSHOT
 
-    docker pull apache/unomi:2.0.0-SNAPSHOT
-    docker run --name unomi --net unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 
-e UNOMI_ELASTICSEARCH_ADDRESSES=elasticsearch:9200 apache/unomi:2.0.0-SNAPSHOT
+For Unomi (with OpenSearch):
 
-## Using a host OS ElasticSearch installation (only supported on macOS & 
Windows)
+    docker pull apache/unomi:2.7.0-SNAPSHOT
+    docker run --name unomi --net unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 
\
+        -e UNOMI_AUTO_START=opensearch \
+        -e UNOMI_OPENSEARCH_ADDRESSES=opensearch:9200 \
+        -e UNOMI_OPENSEARCH_PASSWORD=${OPENSEARCH_ADMIN_PASSWORD} \
+        apache/unomi:2.7.0-SNAPSHOT
+
+## Using a host OS Search Engine installation (only supported on macOS & 
Windows)
+
+For ElasticSearch:
 
-    docker run --name unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 -e 
UNOMI_ELASTICSEARCH_ADDRESSES=host.docker.internal:9200 
apache/unomi:2.0.0-SNAPSHOT
+    docker run --name unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 \
+        -e UNOMI_ELASTICSEARCH_ADDRESSES=host.docker.internal:9200 \
+        apache/unomi:2.7.0-SNAPSHOT
+
+For OpenSearch:
+
+    docker run --name unomi -p 8181:8181 -p 9443:9443 -p 8102:8102 \
+        -e UNOMI_AUTO_START=opensearch \
+        -e UNOMI_OPENSEARCH_ADDRESSES=host.docker.internal:9200 \
+        -e UNOMI_OPENSEARCH_PASSWORD=${OPENSEARCH_ADMIN_PASSWORD} \
+        apache/unomi:2.7.0-SNAPSHOT
 
 Note: Linux doesn't support the host.docker.internal DNS lookup method yet, it 
should be available in an upcoming version of Docker. See 
https://github.com/docker/for-linux/issues/264
 
+## Environment Variables
+
+### Common Variables
+- `UNOMI_AUTO_START`: Specifies the search engine type (`elasticsearch` or 
`opensearch`, defaults to `elasticsearch`)

Review Comment:
   The `UNOMI_AUTO_START` name does not reflect that it is used to choose which persistence implementation will be used. I would suggest looking for a more meaningful name for this option.



##########
api/src/main/java/org/apache/unomi/api/services/ProfileService.java:
##########
@@ -443,4 +443,10 @@ default Session loadSession(String sessionId) {
      */
     @Deprecated
     void purgeMonthlyItems(int existsNumberOfMonths);
+
+    /**
+     * Delete a session using its identifier
+     * @param sessionIdentifier the unique identifier for the session
+     */
+    void deleteSession(String sessionIdentifier);

Review Comment:
   This new API looks convenient, but I don't see the relation with the OpenSearch implementation.
   What is the need to introduce it now?
   I don't see any test coverage for this new method.



##########
itests/README.md:
##########
@@ -260,3 +281,60 @@ And the final step is, zipping the new version of the snapshot repository and re
 > In case you are using docker, do zip in the container and use `docker cp` to 
 > get the zip file from the docker container.
 
 Now you can modify the migration test class to test that your added data in 1.6.x is correctly migrated in 2.0.0
+
+# Known issues
+
+In the OpenSearch test logs, you will see a lot of lines that look like this : 
+
+    opensearch> [2024-12-31T15:33:14,652][WARN ][o.o.w.QueryGroupTask     ] [f3200971b164] QueryGroup _id can't be null, It should be set before accessing it. This is abnormal behaviour
+
+This is due to a bug in OpenSearch 2.18 but it has no impact on the actual functionality. You can track this bug here:
+
+    https://github.com/opensearch-project/OpenSearch/issues/16874
+
+## Karaf Tools
+
+The `kt.sh` script (short for "Karaf Tools") provides convenient utilities for working with Karaf logs and directories during integration testing. Since Karaf test directories are created with unique UUIDs for each test run, this script helps locate and work with the latest test instance.
+
+### Usage
+
+```bash
+./kt.sh COMMAND [ARGS]
+```
+
+### Available Commands
+
+| Command | Alias | Description                                       |
+|---------|-------|---------------------------------------------------|
+| `log`   | `l`   | View the latest Karaf log file using less         |
+| `tail`  | `t`   | Tail the current Karaf log file                   |
+| `grep`  | `g`   | Grep the latest Karaf log file (requires pattern) |
+| `dir`   | `d`   | Print the latest Karaf directory path             |
+| `pushd` | `p`   | Change to the latest Karaf directory using pushd  |
+| `help`  | `h`   | Show help message                                 |

Review Comment:
   The Karaf tools look like a nice addition, but I would prefer to have this work outside the current PR; it doesn't sound related to OpenSearch support, and should therefore be reviewed in its own scope (dedicated ticket/review, etc.).



##########
itests/src/test/java/org/apache/unomi/itests/BaseIT.java:
##########
@@ -309,6 +375,9 @@ public Option[] config() {
             karafOptions.add(editConfigurationFilePut("etc/org.ops4j.pax.logging.cfg", "log4j2.logger.customLogging.level", customLoggingParts[1]));
         }
 
+        searchEngine = System.getProperty(SEARCH_ENGINE_PROPERTY, SEARCH_ENGINE_ELASTICSEARCH);
+        System.out.println("Search Engine: " + searchEngine);

Review Comment:
   Should we not use the `LOGGER` instead of `System.out`?



##########
tools/shell-dev-commands/src/main/java/org/apache/unomi/shell/commands/ActionRemove.java:
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.shell.commands;
+
+import org.apache.karaf.shell.api.action.Argument;
+import org.apache.karaf.shell.api.action.Command;
+import org.apache.karaf.shell.api.action.lifecycle.Reference;
+import org.apache.karaf.shell.api.action.lifecycle.Service;
+import org.apache.unomi.api.services.DefinitionsService;
+
+@Command(scope = "unomi", name = "action-remove", description = "This command will remove an action type.")
+@Service
+public class ActionRemove extends RemoveCommandSupport {

Review Comment:
   Nice addition, but is it related to OpenSearch support?



##########
itests/src/test/java/org/apache/unomi/itests/JSONSchemaIT.java:
##########
@@ -340,7 +339,14 @@ public void testFlattenedProperties() throws Exception {
         condition.setParameter("propertyName", "flattenedProperties.interests.cars");
         condition.setParameter("comparisonOperator", "greaterThan");
         condition.setParameter("propertyValueInteger", 2);
-        assertNull(persistenceService.query(condition, null, Event.class, 0, -1));
+        // OpenSearch handles flattened fields differently than Elasticsearch
+        if ("opensearch".equals(searchEngine)) {
+            assertNotNull("OpenSearch should return results for flattened properties",
+                persistenceService.query(condition, null, Event.class, 0, -1));
+        } else {
+            assertNull("Elasticsearch should return null for flattened properties",
+                persistenceService.query(condition, null, Event.class, 0, -1));
+        }

Review Comment:
   This could be a problem: does it mean the two implementations don't behave the same way?



##########
itests/src/test/resources/org.apache.unomi.healthcheck.cfg:
##########
@@ -22,10 +22,18 @@ esLogin = ${org.apache.unomi.elasticsearch.username:-}
 esPassword = ${org.apache.unomi.elasticsearch.password:-}
 httpClient.trustAllCertificates = ${org.apache.unomi.elasticsearch.sslTrustAllCertificates:-false}
 
+# OpenSearch configuration
+osAddresses = ${org.apache.unomi.opensearch.addresses:-localhost:9200}
+osSSLEnabled = ${org.apache.unomi.opensearch.sslEnable:-false}
+osLogin = ${org.apache.unomi.opensearch.username:-admin}
+osPassword = ${org.apache.unomi.opensearch.password:-}
+osMinimalClusterState = ${org.apache.unomi.opensearch.minimalClusterState:-YELLOW}
+httpClient.trustAllCertificates = ${org.apache.unomi.opensearch.sslTrustAllCertificates:-false}

Review Comment:
   Isn't this conflicting with the ES setting?
   It looks like it will override the ES settings; it may be safer to use another name.
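One way to avoid the key collision flagged above would be to namespace the second key, assuming the healthcheck bundle can be made to read it (the `osHttpClient` prefix below is hypothetical, not an existing Unomi key):

```
# Hypothetical renamed key, so the ES value of httpClient.trustAllCertificates
# is no longer overridden by the OpenSearch block:
osHttpClient.trustAllCertificates = ${org.apache.unomi.opensearch.sslTrustAllCertificates:-false}
```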



##########
kar/src/main/feature/feature.xml:
##########
@@ -16,12 +16,12 @@
   ~ limitations under the License.
   -->
 
-<features name="unomi-kar" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.3.0 http://karaf.apache.org/xmlns/features/v1.3.0">
+<features name="unomi-features" xmlns="http://karaf.apache.org/xmlns/features/v1.3.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://karaf.apache.org/xmlns/features/v1.3.0 http://karaf.apache.org/xmlns/features/v1.3.0">

Review Comment:
   All this kar -> features improvement is welcome, and I do understand the need to correctly handle the different persistence-layer implementations.
   But I'm afraid some projects will be impacted; I'm thinking of projects that repackage Unomi with additional features, which probably rely on the existing structure/naming of the kar/features in places.



##########
itests/src/test/java/org/apache/unomi/itests/BaseIT.java:
##########
@@ -200,13 +218,28 @@ public void waitForStartup() throws InterruptedException {
         httpClient = initHttpClient(getHttpClientCredentialProvider());
     }
 
+    private void waitForUnomiManagementService() throws InterruptedException {
+        UnomiManagementService unomiManagementService = getOsgiService(UnomiManagementService.class, 600000);
+        while (unomiManagementService == null) {
+            LOGGER.info("Waiting for Unomi Management Service to be available...");
+            Thread.sleep(1000);
+            unomiManagementService = getOsgiService(UnomiManagementService.class, 600000);
+        }

Review Comment:
   Could it generate an endless loop?
   If the service is not found correctly on the first call to `getOsgiService`, that call's timeout already looks reasonable.
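The concern above could be addressed with a deadline-bounded variant. A sketch only: the `ServiceLookup` interface below is my stand-in for `getOsgiService`, not the real pax-exam API.

```java
public class BoundedServiceWait {

    interface ServiceLookup {
        Object lookup();
    }

    // Polls until the service appears or the overall deadline passes; unlike an
    // unconditional while (service == null) loop, this cannot spin forever.
    static Object waitForService(ServiceLookup lookup, long timeoutMs, long pollMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            Object service = lookup.lookup();
            if (service != null) {
                return service;
            }
            if (System.currentTimeMillis() >= deadline) {
                throw new IllegalStateException("Service not available after " + timeoutMs + "ms");
            }
            Thread.sleep(pollMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a service that only becomes available on the third poll
        final int[] calls = {0};
        Object service = waitForService(() -> ++calls[0] < 3 ? null : "service", 1000, 10);
        System.out.println(service); // service
    }
}
```

Failing fast with an exception makes a broken environment visible in the test report instead of hanging the run indefinitely.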



##########
package/src/main/resources/etc/custom.system.properties:
##########
@@ -165,6 +165,80 @@ org.apache.unomi.elasticsearch.password=${env:UNOMI_ELASTICSEARCH_PASSWORD:-}
 org.apache.unomi.elasticsearch.sslEnable=${env:UNOMI_ELASTICSEARCH_SSL_ENABLE:-false}
 org.apache.unomi.elasticsearch.sslTrustAllCertificates=${env:UNOMI_ELASTICSEARCH_SSL_TRUST_ALL_CERTIFICATES:-false}
 
+#######################################################################################################################
+## OpenSearch settings                                                                                               ##
+#######################################################################################################################
+org.apache.unomi.opensearch.cluster.name=${env:UNOMI_OPENSEARCH_CLUSTERNAME:-opensearch-cluster}
+# The opensearch.addresses may be a comma seperated list of host names and ports such as
+# hostA:9200,hostB:9200
+# Note: the port number must be repeated for each host.
+org.apache.unomi.opensearch.addresses=${env:UNOMI_OPENSEARCH_ADDRESSES:-localhost:9200}
+# refresh policy per item type in Json.
+# Valid values are WAIT_UNTIL/IMMEDIATE/NONE. The default refresh policy is NONE.
+# Example: "{"event":"WAIT_UNTIL","rule":"NONE"}
+org.apache.unomi.opensearch.itemTypeToRefreshPolicy=${env:UNOMI_OPENSEARCH_REFRESH_POLICY_PER_ITEM_TYPE:-}
+org.apache.unomi.opensearch.fatalIllegalStateErrors=${env:UNOMI_OPENSEARCH_FATAL_STATE_ERRORS:-}
+org.apache.unomi.opensearch.index.prefix=${env:UNOMI_OPENSEARCH_INDEXPREFIX:-context}
+
+# These monthlyIndex properties are now deprecated, please use rollover equivalent.

Review Comment:
   These deprecated properties should probably not be re-added to the OpenSearch implementation, should they?



##########
itests/src/test/java/org/apache/unomi/itests/BaseIT.java:
##########
@@ -164,10 +167,25 @@ public abstract class BaseIT extends KarafTestSupport {
     public void waitForStartup() throws InterruptedException {
         // disable retry
         retry = new KarafTestSupport.Retry(false);
+        searchEngine = System.getProperty(SEARCH_ENGINE_PROPERTY, SEARCH_ENGINE_ELASTICSEARCH);
 
         // Start Unomi if not already done
         if (!unomiStarted) {
-            executeCommand("unomi:start");
+            // We must check that the Unomi Management Service is up and running before launching the
+            // command otherwise the start configuration will not be properly populated.
+            waitForUnomiManagementService();
+            if (SEARCH_ENGINE_ELASTICSEARCH.equals(searchEngine)) {
+                LOGGER.info("Starting Unomi with elasticsearch search engine...");
+                System.out.println("==== Starting Unomi with elasticsearch search engine...");
+                executeCommand("unomi:start");
+            } else if (SEARCH_ENGINE_OPENSEARCH.equals(searchEngine)){
+                LOGGER.info("Starting Unomi with opensearch search engine...");
+                System.out.println("==== Starting Unomi with opensearch search engine...");
+                executeCommand("unomi:start " + SEARCH_ENGINE_OPENSEARCH);

Review Comment:
   I am not sure that mixing the search engine to be used into the `unomi:start` command is a good idea; normally the configuration should be self-sufficient to configure the search engine properly, without needing to complexify the `unomi:start` command.
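The alternative suggested above can be sketched as a configuration-driven resolution with a default, so the start command needs no argument. The property key here is hypothetical, chosen only for illustration; it is not an existing Unomi key.

```java
import java.util.Properties;

public class EngineFromConfig {

    // Resolve the search engine from configuration, falling back to a default;
    // the key "org.apache.unomi.searchengine" is a made-up example.
    static String resolveEngine(Properties config) {
        return config.getProperty("org.apache.unomi.searchengine", "elasticsearch");
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        System.out.println(resolveEngine(props)); // elasticsearch (default)
        props.setProperty("org.apache.unomi.searchengine", "opensearch");
        System.out.println(resolveEngine(props)); // opensearch
    }
}
```

With this shape, `unomi:start` stays argument-free and the engine choice lives in one place alongside the rest of the persistence configuration.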



##########
itests/src/test/java/org/apache/unomi/itests/ProgressSuite.java:
##########
@@ -0,0 +1,81 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.itests;
+
+import org.junit.Test;
+import org.junit.runner.Description;
+import org.junit.runner.notification.RunNotifier;
+import org.junit.runners.Suite;
+import org.junit.runners.model.InitializationError;
+
+import java.lang.reflect.Method;
+import java.util.concurrent.atomic.AtomicInteger;
+
+public class ProgressSuite extends Suite {

Review Comment:
   Same here: a bit of javadoc would be welcome to understand what it is used for.
   (Also looks out of scope for the OpenSearch support.)



##########
extensions/web-tracker/wab/yarn.lock:
##########
@@ -4,27 +4,27 @@
 
 "@ampproject/remapping@^2.1.0":
   version "2.2.0"
-  resolved "https://registry.yarnpkg.com/@ampproject/remapping/-/remapping-2.2.0.tgz#56c133824780de3174aed5ab6834f3026790154d"

Review Comment:
   Do you know why this file has been updated?



##########
plugins/baseplugin/src/main/java/org/apache/unomi/plugins/baseplugin/conditions/PropertyConditionEvaluator.java:
##########
@@ -91,14 +93,15 @@ private int compare(Object actualValue, String expectedValue, Object expectedVal
         } else if (expectedValueDateExpr != null) {
             return getDate(actualValue).compareTo(getDate(expectedValueDateExpr));
         } else {
-            return actualValue.toString().compareTo(expectedValue);
+            // We use toLowerCase here to match the behavior of the analyzer configuration in the persistence configuration
+            return actualValue.toString().toLowerCase().compareTo(expectedValue);

Review Comment:
   Should we not call `ConditionContextHelper.foldToASCII` instead of manually lowercasing here?
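
   For context, `foldToASCII` goes further than lowercasing by also stripping diacritics, which matches what an ASCII-folding analyzer does. A minimal stdlib sketch of that behavior (not Unomi's actual implementation):

   ```java
   import java.text.Normalizer;

   public class FoldCompareSketch {
       // Hypothetical stand-in for ConditionContextHelper.foldToASCII:
       // decompose accented characters, drop combining marks, lowercase.
       static String foldToAscii(String s) {
           if (s == null) {
               return null;
           }
           String decomposed = Normalizer.normalize(s, Normalizer.Form.NFD);
           return decomposed.replaceAll("\\p{M}", "").toLowerCase();
       }

       public static void main(String[] args) {
           // "Église" and "eglise" compare equal after folding, but not
           // after a plain toLowerCase(), which keeps the accent.
           System.out.println(foldToAscii("Église").compareTo(foldToAscii("eglise"))); // prints 0
       }
   }
   ```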



##########
itests/src/test/java/org/apache/unomi/itests/BaseIT.java:
##########
@@ -223,12 +256,34 @@ protected void refreshPersistence(final Class<? extends Item>... classes) throws
 
     @Override
     public MavenArtifactUrlReference getKarafDistribution() {
-        return CoreOptions.maven().groupId("org.apache.unomi").artifactId("unomi").versionAsInProject().type("tar.gz");
+        return maven().groupId("org.apache.unomi").artifactId("unomi").versionAsInProject().type("tar.gz");
     }
 
     @Configuration
     public Option[] config() {
         System.out.println("==== Configuring container");
+
+        searchEngine = System.getProperty(SEARCH_ENGINE_PROPERTY, SEARCH_ENGINE_ELASTICSEARCH);
+        System.out.println("Search Engine: " + searchEngine);
+
+        // Define features option based on search engine
+        Option featuresOption;
+        if (SEARCH_ENGINE_ELASTICSEARCH.equals(searchEngine)) {
+            featuresOption = features(maven().groupId("org.apache.unomi")
+                    .artifactId("unomi-kar").versionAsInProject().type("xml").classifier("features"),
+                    "unomi-persistence-elasticsearch", "unomi-services",
+                    "unomi-router-karaf-feature", "unomi-groovy-actions",
+                    "unomi-web-applications", "unomi-rest-ui", "unomi-healthcheck", "cdp-graphql-feature");
+        } else if (SEARCH_ENGINE_OPENSEARCH.equals(searchEngine)) {
+            featuresOption = features(maven().groupId("org.apache.unomi")
+                    .artifactId("unomi-kar").versionAsInProject().type("xml").classifier("features"),
+                    "unomi-persistence-opensearch", "unomi-services",
+                    "unomi-router-karaf-feature", "unomi-groovy-actions",
+                    "unomi-web-applications", "unomi-rest-ui", "unomi-healthcheck", "cdp-graphql-feature");
+        } else {
+            throw new IllegalArgumentException("Unknown search engine: " + searchEngine);
+        }

Review Comment:
   Seems like it could be simplified, since the only difference between these two blocks is `unomi-persistence-opensearch` vs `unomi-persistence-elasticsearch`.
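
   One way the branches could collapse, sketched outside the pax-exam API: derive the single differing feature name from the engine and keep one feature list (feature names taken from the diff; the helper shape is illustrative):

   ```java
   import java.util.Arrays;
   import java.util.List;

   public class FeatureListSketch {
       // Only "unomi-persistence-<engine>" differs between the two branches,
       // so validate the engine once and build a single list.
       static List<String> featuresFor(String searchEngine) {
           if (!"elasticsearch".equals(searchEngine) && !"opensearch".equals(searchEngine)) {
               throw new IllegalArgumentException("Unknown search engine: " + searchEngine);
           }
           return Arrays.asList(
                   "unomi-persistence-" + searchEngine, "unomi-services",
                   "unomi-router-karaf-feature", "unomi-groovy-actions",
                   "unomi-web-applications", "unomi-rest-ui",
                   "unomi-healthcheck", "cdp-graphql-feature");
       }
   }
   ```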



##########
itests/src/test/java/org/apache/unomi/itests/migration/Migrate16xTo220IT.java:
##########
@@ -59,16 +79,31 @@ public void waitForStartup() throws InterruptedException {
             // Restore the snapshot
            HttpUtils.executePostRequest(httpClient, "http://localhost:9400/_snapshot/snapshots_repository/snapshot_1.6.x/_restore?wait_for_completion=true", "{}", null);
 
+            String snapshotStatus = HttpUtils.executeGetRequest(httpClient, "http://localhost:9400/_snapshot/_status", null);
+            System.out.println(snapshotStatus);
+            LOGGER.info(snapshotStatus);
+

Review Comment:
   Is it necessary?



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/DateUtils.java:
##########
@@ -0,0 +1,54 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions;
+
+import 
org.apache.unomi.persistence.spi.conditions.datemath.DateMathParseException;
+import org.apache.unomi.persistence.spi.conditions.datemath.DateMathParser;
+import org.apache.unomi.persistence.spi.conditions.datemath.JavaDateFormatter;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.time.Instant;
+import java.time.ZoneOffset;
+import java.time.format.DateTimeFormatter;
+import java.util.Date;
+
+public class DateUtils {

Review Comment:
   A little bit of javadoc would be welcome here, both on the class and methods.



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/datemath/JavaDateFormatter.java:
##########
@@ -0,0 +1,351 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions.datemath;
+
+import java.time.*;
+import java.time.format.DateTimeFormatter;
+import java.time.format.DateTimeFormatterBuilder;
+import java.time.temporal.ChronoField;
+import java.time.temporal.TemporalAccessor;
+import java.time.temporal.TemporalQueries;
+import java.util.ArrayList;
+import java.util.List;
+
+public class JavaDateFormatter {

Review Comment:
   What is the reason for re-implementing a Java date formatter? What was wrong with the previous `JodaDateMathParser`?
   A bit of context in comments/javadoc justifying the necessity of this new implem would be greatly appreciated.
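
   For readers unfamiliar with the feature: a date-math parser resolves ES-style expressions such as `now-1d` against a reference instant. A toy stdlib version, far simpler than the PR's actual classes (illustrative only, supports just `now` and `now-<n>d`):

   ```java
   import java.time.Instant;
   import java.time.temporal.ChronoUnit;

   public class DateMathToy {
       // The real parser also handles rounding ("/d"), more units (h, m, w...),
       // and explicit date literals; this shows only the core idea.
       static Instant eval(String expr, Instant now) {
           if ("now".equals(expr)) {
               return now;
           }
           if (expr.startsWith("now-") && expr.endsWith("d")) {
               long days = Long.parseLong(expr.substring(4, expr.length() - 1));
               return now.minus(days, ChronoUnit.DAYS);
           }
           throw new IllegalArgumentException("Unsupported expression: " + expr);
       }
   }
   ```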



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/ConditionEvaluatorDispatcher.java:
##########
@@ -0,0 +1,32 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions;
+
+import org.apache.unomi.api.Item;
+import org.apache.unomi.api.conditions.Condition;
+
+import java.util.Map;
+
+public interface ConditionEvaluatorDispatcher {

Review Comment:
   Missing java doc on this interface.



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/geo/DistanceUnit.java:
##########
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions.geo;
+
+import java.util.HashMap;
+import java.util.Map;
+
+public enum DistanceUnit {

Review Comment:
   I understand that because those classes were previously used from the elasticsearch library, you made a best effort to replace them with a home-made implementation.
   Maybe using JSR 385 would be a better idea for maintainability?
   https://github.com/unitsofmeasurement/indriya
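
   For comparison, the core of such a home-made unit enum is just a conversion factor per unit (a hypothetical sketch, not the PR's actual `DistanceUnit`); JSR 385 would replace this with typed `Quantity<Length>` values and ready-made conversions:

   ```java
   public enum DistanceSketch {
       // Conversion factors to meters: the kind of table a home-made
       // distance-unit enum typically carries.
       METERS(1.0), KILOMETERS(1000.0), MILES(1609.344), NAUTICAL_MILES(1852.0);

       private final double metersPerUnit;

       DistanceSketch(double metersPerUnit) {
           this.metersPerUnit = metersPerUnit;
       }

       public double toMeters(double value) {
           return value * metersPerUnit;
       }
   }
   ```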



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/datemath/DateMathParser.java:
##########
@@ -0,0 +1,270 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions.datemath;
+
+import java.time.*;
+import java.time.format.DateTimeFormatter;
+import java.time.temporal.ChronoField;
+import java.time.temporal.TemporalAccessor;
+import java.time.temporal.TemporalAdjusters;
+import java.time.temporal.TemporalQueries;
+import java.util.function.Function;
+import java.util.function.LongSupplier;
+
+public class DateMathParser {

Review Comment:
   Missing javadoc here; would appreciate some explanation regarding the necessity of a custom date math parser VS using existing implems.



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/geo/GeoDistance.java:
##########
@@ -0,0 +1,102 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions.geo;
+
+public enum GeoDistance {

Review Comment:
   Same proposition: should we consider using JSR 385 (https://github.com/unitsofmeasurement/indriya) instead of re-implementing those utilities?



##########
persistence-spi/src/main/java/org/apache/unomi/persistence/spi/conditions/PastEventConditionPersistenceQueryBuilder.java:
##########
@@ -0,0 +1,31 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.persistence.spi.conditions;
+
+import org.apache.unomi.api.conditions.Condition;
+import org.apache.unomi.api.services.DefinitionsService;
+import org.apache.unomi.scripting.ScriptExecutor;
+
+import java.util.Map;
+
+public interface PastEventConditionPersistenceQueryBuilder {

Review Comment:
   Missing javadoc on this new interface.



##########
plugins/hover-event/src/main/java/org/apache/unomi/plugins/events/hover/querybuilders/HoverEventConditionESQueryBuilder.java:
##########


Review Comment:
   Should we not move this ESQueryBuilder to the elasticsearch persistence package?



##########
persistence-elasticsearch/core/src/main/java/org/apache/unomi/persistence/elasticsearch/ElasticSearchPersistenceServiceImpl.java:
##########
@@ -1307,7 +1317,7 @@ protected Boolean execute(Object... args) throws Exception {
                         itemType = customItemType;
                     }
                    String documentId = getDocumentIDForItemType(itemId, itemType);
-                    String index = getIndexNameForQuery(itemType);
+                    String index = getIndex(itemType);

Review Comment:
   I'm not sure this will work for rollover indices like sessions and events.
   What if the to_be_removed_itemId is in a past rolled-over index?
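
   To make the concern concrete, with hypothetical index names (actual rollover naming depends on configuration): a resolved write index only points at the latest generation, while the wildcard form also covers indices created before a rollover.

   ```java
   public class RolloverNamesSketch {
       // Current write index after two rollovers (hypothetical naming).
       static String writeIndex(String itemType) {
           return "context-" + itemType + "-000003";
       }

       // Wildcard pattern covering every rolled-over generation, which is
       // what a delete-by-id needs when the item may live in an older index.
       static String queryPattern(String itemType) {
           return "context-" + itemType + "-*";
       }
   }
   ```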



##########
tools/shell-commands/src/main/java/org/apache/unomi/shell/actions/Start.java:
##########
@@ -29,8 +31,20 @@ public class Start implements Action {
     @Reference
     UnomiManagementService unomiManagementService;
 
+    @Argument(name = "persistence", description = "Persistence implementation to use (elasticsearch/opensearch)", valueToShowInHelp = "elasticsearch")
+    private String selectedPersistenceImplementation = "elasticsearch";
+

Review Comment:
   It still looks strange that the start command is responsible for choosing the persistence implem; I still think this should be handled by configuration.



##########
tools/shell-dev-commands/src/main/java/org/apache/unomi/shell/commands/ConditionRemove.java:
##########
@@ -0,0 +1,45 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.unomi.shell.commands;
+
+import org.apache.karaf.shell.api.action.Argument;
+import org.apache.karaf.shell.api.action.Command;
+import org.apache.karaf.shell.api.action.lifecycle.Reference;
+import org.apache.karaf.shell.api.action.lifecycle.Service;
+import org.apache.unomi.api.services.DefinitionsService;
+
+@Command(scope = "unomi", name = "condition-remove", description = "This command will remove an condition type.")
+@Service
+public class ConditionRemove extends RemoveCommandSupport {

Review Comment:
   Nice addition, but isn't it unrelated to OpenSearch support?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@unomi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
