fgerlits commented on code in PR #2044:
URL: https://github.com/apache/nifi-minifi-cpp/pull/2044#discussion_r2444977857


##########
.github/workflows/verify-package.yml:
##########
@@ -94,17 +134,14 @@ jobs:
       - run: |
           cd build && cmake ${DOCKER_CMAKE_FLAGS} ..
           VERIFY_CMD="${{ matrix.platform.verify_cmd }}"
-          if [[ "${{ inputs.enable_fips }}" == "true" ]]; then
-            VERIFY_CMD="${VERIFY_CMD}-fips"
-          fi
           ${{ matrix.platform.build_cmd }} && $VERIFY_CMD
 
       - name: Test Reporter
         if: always()
        uses: phoenix-actions/test-reporting@f957cd93fc2d848d556fa0d03c57bc79127b6b5e  # v15
         with:
          name: "${{ matrix.platform.name }} (${{ matrix.arch }})${{ inputs.enable_fips && ' (FIPS Mode)' || '' }}"
-          path: build/behavex_output/behave/*.xml
+          path: build/behavex_output_2/behave/*.xml

Review Comment:
   Can we change `_2` to `_modular` or something similar?



##########
docker/RunBehaveTests.sh:
##########
@@ -0,0 +1,200 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -e
+
+die()
+{
+  local _ret="${2:-1}"
+  test "${_PRINT_HELP:-no}" = yes && print_help >&2
+  echo "$1" >&2
+  exit "${_ret}"
+}
+
+_positionals=()
+_arg_image_tag_prefix=
+_arg_tags_to_exclude=
+_arg_parallel_processes=3
+
+print_help()
+{
+  printf '%s\n' "Runs the provided behave tests in a containerized environment"
+  printf 'Usage: %s [--image-tag-prefix <arg>] [-h|--help] <minifi_version> <tags_to_run> ...\n' "$0"
+  printf '\t%s\n' "<minifi_version>: the version of minifi"
+  printf '\t%s\n' "<tags_to_run>: include these tags"
+  printf '\t%s\n' "--tags_to_exclude: optional tags that should be skipped (no default)"
+  printf '\t%s\n' "--image-tag-prefix: optional prefix to the docker tag (no default)"
+  printf '\t%s\n' "--parallel_processes: optional argument that specifies the number of parallel processes that can be executed simultaneously (default: 3)"
+  printf '\t%s\n' "--fips: enable FIPS mode (disabled by default)"
+  printf '\t%s\n' "-h, --help: Prints help"
+}
+
+
+parse_commandline()
+{
+  _positionals_count=0
+  _arg_fips=false  # Default to false
+  while test $# -gt 0
+  do
+    _key="$1"
+    case "$_key" in
+      --image-tag-prefix)
+        test $# -lt 2 && die "Missing value for the optional argument '$_key'." 1
+        _arg_image_tag_prefix="$2"
+        shift
+        ;;
+      --image-tag-prefix=*)
+        _arg_image_tag_prefix="${_key##--image-tag-prefix=}"
+        ;;
+      --tags_to_exclude)
+        test $# -lt 2 && die "Missing value for the optional argument '$_key'." 1
+        _arg_tags_to_exclude="$2"
+        shift
+        ;;
+      --tags_to_exclude=*)
+        _arg_tags_to_exclude="${_key##--tags_to_exclude=}"
+        ;;
+      --parallel_processes)
+        test $# -lt 2 && die "Missing value for the optional argument '$_key'." 1
+        _arg_parallel_processes="$2"
+        shift
+        ;;
+      --parallel_processes=*)
+        _arg_parallel_processes="${_key##--parallel_processes=}"
+        ;;
+      --fips)
+        _arg_fips=true  # Set boolean flag to true when argument is present
+        ;;
+      -h|--help)
+        print_help
+        exit 0
+        ;;
+      -h*)
+        print_help
+        exit 0
+        ;;
+      *)
+        _last_positional="$1"
+        _positionals+=("$_last_positional")
+        _positionals_count=$((_positionals_count + 1))
+        ;;
+    esac
+    shift
+  done
+}
+
+
+
+handle_passed_args_count()
+{
+  local _required_args_string="'minifi_version' and 'tags_to_run'"
+  test "${_positionals_count}" -ge 2 || _PRINT_HELP=yes die "FATAL ERROR: Not enough positional arguments - we require at least 2 (namely: $_required_args_string), but got only ${_positionals_count}." 1
+}
+
+
+assign_positional_args()
+{
+  local _positional_name _shift_for=$1
+  _positional_names="_arg_minifi_version _arg_tags_to_run "
+
+  shift "$_shift_for"
+  for _positional_name in ${_positional_names}
+  do
+    test $# -gt 0 || break
+    eval "$_positional_name=\${1}" || die "Error during argument parsing." 1
+    shift
+  done
+}
+
+parse_commandline "$@"
+handle_passed_args_count
+assign_positional_args 1 "${_positionals[@]}"
+
+docker_dir="$( cd "${0%/*}" && pwd )"
+
+# shellcheck disable=SC2154
+export MINIFI_VERSION=${_arg_minifi_version}
+if test -z "$_arg_image_tag_prefix"
+then
+  export MINIFI_TAG_PREFIX=""
+else
+  export MINIFI_TAG_PREFIX=${_arg_image_tag_prefix}-
+fi
+
+if [ "$_arg_fips" = true ]; then
+  export MINIFI_FIPS="true"
+else
+  export MINIFI_FIPS="false"
+fi
+
+# Create virtual environment for testing
+if [[ ! -d ./behave_venv ]]; then
+  echo "Creating virtual environment in ./behave_venv" 1>&2
+  virtualenv --python=python3 ./behave_venv
+fi
+
+echo "Activating virtual environment..." 1>&2
+# shellcheck disable=SC1091
+. ./behave_venv/bin/activate
+pip install --trusted-host pypi.python.org --upgrade pip setuptools
+
+# Install test dependencies
+echo "Installing test dependencies..." 1>&2
+
+# hint include/library paths if homebrew is in use
+if brew list 2> /dev/null | grep openssl > /dev/null 2>&1; then
+  echo "Using homebrew paths for openssl" 1>&2
+  LDFLAGS="-L$(brew --prefix [email protected])/lib"
+  export LDFLAGS
+  CFLAGS="-I$(brew --prefix [email protected])/include"
+  export CFLAGS
+  SWIG_FEATURES="-cpperraswarn -includeall -I$(brew --prefix [email protected])/include"
+  export SWIG_FEATURES
+fi
+
+if ! command swig -version &> /dev/null; then
+  echo "Swig could not be found on your system (a dependency of the m2crypto python library). Please install swig to continue."
+  exit 1
+fi
+
+pip install -e "${docker_dir}/../behave_framework"
+
+export TMPDIR="/tmp/behavex_ci_${RANDOM}"
+mkdir -p "$TMPDIR"
+
+export TEMP="$TMPDIR/temp"
+export LOGS="$TMPDIR/logs"
+
+BEHAVE_OPTS=(--show-progress-bar --logging-level DEBUG --parallel-processes "${_arg_parallel_processes}" --parallel-scheme feature -o "${PWD}/behavex_output_2" -t "${_arg_tags_to_run}")

Review Comment:
   Do we need `DEBUG` logs? This used to be `INFO` in `DockerVerify.sh`.



##########
behave_framework/src/minifi_test_framework/containers/container.py:
##########
@@ -0,0 +1,266 @@
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+import logging
+import os
+import shlex
+import tempfile
+from docker.models.networks import Network
+from minifi_test_framework.containers.directory import Directory
+from minifi_test_framework.containers.file import File
+from minifi_test_framework.containers.host_file import HostFile
+from typing import Dict, Any, List
+
+import docker
+
+
+class Container:
+    def __init__(self, image_name: str, container_name: str, network: Network):
+        self.image_name: str = image_name
+        self.container_name: str = container_name
+        self.network: Network = network
+        self.user: str = "0:0"
+        self.client = docker.from_env()
+        self.container = None
+        self.files: List[File] = []
+        self.dirs: List[Directory] = []
+        self.host_files: List[HostFile] = []
+        self.volumes = {}
+        self.command = None
+        self._temp_dir = None
+        self.ports = None
+        self.environment: List[str] = []
+
+    def deploy(self) -> bool:
+        self._temp_dir = tempfile.TemporaryDirectory()
+
+        if len(self.files) != 0 or len(self.dirs) != 0 or len(self.host_files) != 0:
+            for file in self.files:
+                temp_path = self._temp_dir.name + "/" + file.host_filename
+                with open(temp_path, "w") as temp_file:
+                    temp_file.write(file.content)
+                self.volumes[temp_path] = {
+                    "bind": file.path + "/" + file.host_filename,
+                    "mode": file.mode
+                }
+            for directory in self.dirs:
+                temp_path = self._temp_dir.name + directory.path
+                for file_name, content in directory.files.items():
+                    file_path = temp_path + "/" + file_name
+                    os.makedirs(temp_path, exist_ok=True)
+                    with open(file_path, "w") as temp_file:
+                        temp_file.write(content)
+                self.volumes[temp_path] = {
+                    "bind": directory.path,
+                    "mode": directory.mode
+                }
+            for host_file in self.host_files:
+                self.volumes[host_file.container_path] = {
+                    "bind": host_file.host_path,
+                    "mode": host_file.mode
+                }
+
+        try:
+            existing_container = self.client.containers.get(self.container_name)
+            logging.warning(f"Found existing container '{self.container_name}'. Removing it first.")
+            existing_container.remove(force=True)
+        except docker.errors.NotFound:
+            pass
+        try:
+            print(f"Creating and starting container '{self.container_name}'...")
+            self.container = self.client.containers.run(
+                image=self.image_name,
+                name=self.container_name,
+                ports=self.ports,
+                environment=self.environment,
+                volumes=self.volumes,
+                network=self.network.name,
+                command=self.command,
+                user=self.user,
+                detach=True  # Always run in the background
+            )
+        except Exception as e:
+            logging.error(f"Error starting container: {e}")
+            raise
+        return True
+
+    def clean_up(self):
+        if self.container is not None:
+            self.container.remove(force=True)
+
+    def exec_run(self, command):
+        if self.container:
+            (code, output) = self.container.exec_run(command)
+            return code, output.decode("utf-8")
+        return None, "Container not running."
+
+    def directory_contains_file_with_content(self, directory_path: str, expected_content: str) -> bool:
+        if not self.container:
+            return False
+
+        escaped_content = expected_content.replace("\"", "\\\"")
+
+        command = f"sh -c \"grep -l -F -- '{escaped_content}' {directory_path}/*\""

Review Comment:
   we could check if the directory is empty (and return False if it is) to 
avoid the "No such file or directory" messages from grep
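   An untested sketch of what such a guard could look like (names are illustrative and `exec_run` here just mimics the `(exit_code, output)` shape of `Container.exec_run`, not the PR's actual code):

   ```python
   # Hypothetical sketch, not the PR's implementation: bail out early when the
   # directory is missing or empty, so grep never sees a non-matching glob.
   def directory_contains_file_with_content(exec_run, directory_path: str, expected_content: str) -> bool:
       # exec_run mimics Container.exec_run: it returns (exit_code, output_str)
       code, listing = exec_run(f"sh -c 'ls -A {directory_path}'")
       if code != 0 or not listing.strip():
           return False  # directory missing or empty: nothing to grep
       code, _ = exec_run(f"sh -c \"grep -l -F -- '{expected_content}' {directory_path}/*\"")
       return code == 0
   ```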



##########
behave_framework/src/minifi_test_framework/containers/container.py:
##########
@@ -0,0 +1,266 @@
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+import logging
+import os
+import shlex
+import tempfile
+from docker.models.networks import Network
+from minifi_test_framework.containers.directory import Directory
+from minifi_test_framework.containers.file import File
+from minifi_test_framework.containers.host_file import HostFile
+from typing import Dict, Any, List
+
+import docker
+
+
+class Container:
+    def __init__(self, image_name: str, container_name: str, network: Network):
+        self.image_name: str = image_name
+        self.container_name: str = container_name
+        self.network: Network = network
+        self.user: str = "0:0"
+        self.client = docker.from_env()
+        self.container = None
+        self.files: List[File] = []
+        self.dirs: List[Directory] = []
+        self.host_files: List[HostFile] = []
+        self.volumes = {}
+        self.command = None
+        self._temp_dir = None
+        self.ports = None
+        self.environment: List[str] = []
+
+    def deploy(self) -> bool:
+        self._temp_dir = tempfile.TemporaryDirectory()
+
+        if len(self.files) != 0 or len(self.dirs) != 0 or len(self.host_files) != 0:
+            for file in self.files:
+                temp_path = self._temp_dir.name + "/" + file.host_filename
+                with open(temp_path, "w") as temp_file:
+                    temp_file.write(file.content)
+                self.volumes[temp_path] = {
+                    "bind": file.path + "/" + file.host_filename,
+                    "mode": file.mode
+                }
+            for directory in self.dirs:
+                temp_path = self._temp_dir.name + directory.path
+                for file_name, content in directory.files.items():
+                    file_path = temp_path + "/" + file_name
+                    os.makedirs(temp_path, exist_ok=True)

Review Comment:
   the `makedirs` could be outside the loop
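   For illustration, the loop body with `os.makedirs` hoisted out (sketch only; `files` stands in for the `directory.files` filename-to-content dict used above):

   ```python
   import os

   def write_directory_files(temp_path: str, files: dict) -> None:
       # The target directory only needs to exist once, so create it before
       # the loop instead of on every iteration.
       os.makedirs(temp_path, exist_ok=True)
       for file_name, content in files.items():
           with open(os.path.join(temp_path, file_name), "w") as temp_file:
               temp_file.write(content)
   ```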



##########
extensions/aws/tests/features/steps/s3_server_container.py:
##########
@@ -0,0 +1,62 @@
+import json
+import logging
+
+from minifi_test_framework.containers.container import Container
+from minifi_test_framework.core.helpers import wait_for_condition
+from minifi_test_framework.core.minifi_test_context import MinifiTestContext
+
+
+class S3ServerContainer(Container):
+    def __init__(self, test_context: MinifiTestContext):
+        super().__init__("adobe/s3mock:3.12.0", f"s3-server-{test_context.scenario_id}", test_context.network)
+        self.environment.append("initialBuckets=test_bucket")
+
+    def deploy(self):
+        super().deploy()
+        finished_str = "Started S3MockApplication"
+        return wait_for_condition(
+            condition=lambda: finished_str in self.get_logs(),
+            timeout_seconds=15,
+            bail_condition=lambda: self.exited,
+            context=None)
+
+    def check_kinesis_server_record_data(self, container_name, record_data):

Review Comment:
   These checks used to have a `@retry_check()` annotation; is that no longer 
needed?
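   I don't know what the new framework offers instead, but the old `@retry_check()` behaviour could be approximated by a decorator along these lines (hedged sketch; the original decorator's signature and defaults may well differ):

   ```python
   import functools
   import time

   # Illustrative stand-in for the old retry_check decorator: re-run a
   # boolean check until it passes or the attempts run out.
   def retry_check(max_tries: int = 5, wait_seconds: float = 1.0):
       def decorator(func):
           @functools.wraps(func)
           def wrapper(*args, **kwargs):
               for attempt in range(max_tries):
                   if func(*args, **kwargs):
                       return True
                   if attempt < max_tries - 1:
                       time.sleep(wait_seconds)
               return False
           return wrapper
       return decorator
   ```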



##########
behave_framework/src/minifi_test_framework/containers/container.py:
##########
@@ -0,0 +1,266 @@
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+import logging
+import os
+import shlex
+import tempfile
+from docker.models.networks import Network
+from minifi_test_framework.containers.directory import Directory
+from minifi_test_framework.containers.file import File
+from minifi_test_framework.containers.host_file import HostFile
+from typing import Dict, Any, List
+
+import docker
+
+
+class Container:
+    def __init__(self, image_name: str, container_name: str, network: Network):
+        self.image_name: str = image_name
+        self.container_name: str = container_name
+        self.network: Network = network
+        self.user: str = "0:0"
+        self.client = docker.from_env()
+        self.container = None
+        self.files: List[File] = []
+        self.dirs: List[Directory] = []
+        self.host_files: List[HostFile] = []
+        self.volumes = {}
+        self.command = None
+        self._temp_dir = None
+        self.ports = None
+        self.environment: List[str] = []
+
+    def deploy(self) -> bool:
+        self._temp_dir = tempfile.TemporaryDirectory()
+
+        if len(self.files) != 0 or len(self.dirs) != 0 or len(self.host_files) != 0:
+            for file in self.files:
+                temp_path = self._temp_dir.name + "/" + file.host_filename
+                with open(temp_path, "w") as temp_file:
+                    temp_file.write(file.content)
+                self.volumes[temp_path] = {
+                    "bind": file.path + "/" + file.host_filename,
+                    "mode": file.mode
+                }
+            for directory in self.dirs:
+                temp_path = self._temp_dir.name + directory.path
+                for file_name, content in directory.files.items():
+                    file_path = temp_path + "/" + file_name
+                    os.makedirs(temp_path, exist_ok=True)
+                    with open(file_path, "w") as temp_file:
+                        temp_file.write(content)
+                self.volumes[temp_path] = {
+                    "bind": directory.path,
+                    "mode": directory.mode
+                }
+            for host_file in self.host_files:
+                self.volumes[host_file.container_path] = {
+                    "bind": host_file.host_path,
+                    "mode": host_file.mode
+                }
+
+        try:
+            existing_container = self.client.containers.get(self.container_name)
+            logging.warning(f"Found existing container '{self.container_name}'. Removing it first.")
+            existing_container.remove(force=True)
+        except docker.errors.NotFound:
+            pass
+        try:
+            print(f"Creating and starting container '{self.container_name}'...")
+            self.container = self.client.containers.run(
+                image=self.image_name,
+                name=self.container_name,
+                ports=self.ports,
+                environment=self.environment,
+                volumes=self.volumes,
+                network=self.network.name,
+                command=self.command,
+                user=self.user,
+                detach=True  # Always run in the background
+            )
+        except Exception as e:
+            logging.error(f"Error starting container: {e}")
+            raise
+        return True
+
+    def clean_up(self):
+        if self.container is not None:
+            self.container.remove(force=True)
+
+    def exec_run(self, command):
+        if self.container:
+            (code, output) = self.container.exec_run(command)
+            return code, output.decode("utf-8")
+        return None, "Container not running."
+
+    def directory_contains_file_with_content(self, directory_path: str, expected_content: str) -> bool:
+        if not self.container:
+            return False
+
+        escaped_content = expected_content.replace("\"", "\\\"")
+
+        command = f"sh -c \"grep -l -F -- '{escaped_content}' {directory_path}/*\""

Review Comment:
   It would be better to use `shlex.quote` (which I didn't know about, but you use it below) in all functions in this file. (E.g. what happens if `expected_content` contains a single quote?)
   
   untested suggestion:
   ```suggestion
        quoted_content = shlex.quote(expected_content)

        command = "sh -c {}".format(shlex.quote(f"grep -l -F -- {quoted_content} {directory_path}/*"))
   ```
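   To illustrate the single-quote point with a standalone example (not the PR code; `directory_path` is a placeholder): `shlex.quote` survives inputs that would terminate the hand-rolled `'...'` wrapping.

   ```python
   import shlex

   expected_content = "it's here"  # the ' would break the manual '...' quoting
   directory_path = "/some/dir"

   inner = f"grep -l -F -- {shlex.quote(expected_content)} {directory_path}/*"
   command = "sh -c {}".format(shlex.quote(inner))

   # shlex.split undoes the quoting, showing the content round-trips intact
   assert shlex.split(shlex.quote(expected_content)) == [expected_content]
   ```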



##########
behave_framework/src/minifi_test_framework/containers/minifi_container.py:
##########
@@ -0,0 +1,85 @@
+#
+#  Licensed to the Apache Software Foundation (ASF) under one or more
+#  contributor license agreements.  See the NOTICE file distributed with
+#  this work for additional information regarding copyright ownership.
+#  The ASF licenses this file to You under the Apache License, Version 2.0
+#  (the "License"); you may not use this file except in compliance with
+#  the License.  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+#
+
+from docker.models.networks import Network
+from minifi_test_framework.containers.file import File
+from minifi_test_framework.minifi.flow_definition import FlowDefinition
+from typing import Dict
+
+from .container import Container
+
+
+class MinifiContainer(Container):
+    def __init__(self, image_name: str, scenario_id: str, network: Network):
+        super().__init__(image_name, f"minifi-{scenario_id}", network)
+        self.flow_config_str: str = ""
+        self.flow_definition = FlowDefinition()
+        self.properties: Dict[str, str] = {}
+        self.log_properties: Dict[str, str] = {}
+
+        self.is_fhs = 'MINIFI_INSTALLATION_TYPE=FHS' in str(self.client.images.get(image_name).history())
+
+        self._fill_default_properties()
+        self._fill_default_log_properties()
+
+    def deploy(self) -> bool:
+        if self.is_fhs:
+            self.files.append(File("/etc/nifi-minifi-cpp", "config.yml", self.flow_definition.to_yaml()))
+            self.files.append(File("/etc/nifi-minifi-cpp", "minifi.properties", self._get_properties_file_content()))
+            self.files.append(
+                File("/etc/nifi-minifi-cpp", "minifi-log.properties", self._get_log_properties_file_content()))
+        else:
+            self.files.append(File("/opt/minifi/minifi-current/conf", "config.yml", self.flow_definition.to_yaml()))
+            self.files.append(
+                File("/opt/minifi/minifi-current/conf", "minifi.properties", self._get_properties_file_content()))
+            self.files.append(File("/opt/minifi/minifi-current/conf", "minifi-log.properties",
+                                   self._get_log_properties_file_content()))

Review Comment:
   is `minifi-uid.properties` not needed here?



##########
docker/test/integration/features/consume_kafka.feature:
##########


Review Comment:
   You have already converted some of these tests in 
`extensions/kafka/tests/features/consumekafka.feature`. I think the tests which 
are in `consumekafka.feature` should be removed from here.



##########
extensions/kafka/tests/features/consumekafka.feature:
##########


Review Comment:
   Is the `consume_kafka.feature` → `consumekafka.feature` name change 
intentional? If not, then I would prefer the old name, because it's easier to 
read.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
