This is an automated email from the ASF dual-hosted git repository.

mck pushed a commit to branch cassandra-5.0
in repository https://gitbox.apache.org/repos/asf/cassandra.git


The following commit(s) were added to refs/heads/cassandra-5.0 by this push:
     new f7c11bdcd4 Standalone Jenkinsfile
f7c11bdcd4 is described below

commit f7c11bdcd458b2eb0769a4b76698fb6382cdab3a
Author: Mick Semb Wever <[email protected]>
AuthorDate: Tue Jun 13 16:07:07 2023 +0200

    Standalone Jenkinsfile
    
     - ensure test file paths, and their suite names, are unique (the latter was broken for cqlshlib and python dtests)
     - removal of properties and system-out in test xml happens in CassandraXMLJUnitResultFormatter
     - new jenkins agent labels, and introduce agent sizes
     - ci_summary.html generation script, ref work submitted in apache/cassandra-builds#99
     - fix for stress-test and fqltool-test running on the small agent size
     - ant generate-test-report is limited to only running on individual test types (ci_parser.py now provides ci_summary.html for the overview)
     - each cell has a single retry, and the retry happens on a different agent
     - on ci-cassandra the summary stage happens on the builtin node, because copyArtifacts on 15k+ files otherwise takes many hours
     - test-burn only needs two splits
     - dependency-check is disabled from the lint target until CASSANDRA-19213
     - add $DEBUG env var to in-tree scripts, turning on bash debug
     - fix FBUtilities' handling of gcp cos_containerd (kernel version comes with a trailing '+' character)
    
     patch by Aleksandr Volochnev, Mick Semb Wever; reviewed by Aleksandr Volochnev, Josh McKenzie, Maxim Muzafarov, Stefan Miklosovic for CASSANDRA-18594
    
    Co-authored-by: Aleksandr Volochnev <[email protected]>
    Co-authored-by: Mick Semb Wever <[email protected]>
    Co-authored-by: Josh McKenzie <[email protected]>
    Co-authored-by: Artem Chekunov <[email protected]>
---
 .build/README.md                                   |  122 +-
 .build/check-code.sh                               |    2 +-
 .build/ci/ci_parser.py                             |  307 +++++
 .build/ci/generate-ci-summary.sh                   |   60 +
 .../{check-code.sh => ci/generate-test-report.sh}  |   17 +-
 .build/ci/junit_helpers.py                         |  320 +++++
 .build/ci/logging.sh                               |  124 ++
 .build/ci/logging_helper.py                        |  136 +++
 .build/{check-code.sh => ci/precommit_check.sh}    |   41 +-
 .build/{check-code.sh => ci/requirements.txt}      |   15 +-
 .build/docker/_docker_run.sh                       |    8 +-
 .build/{check-code.sh => docker/build-jars.sh}     |   14 +-
 .build/docker/bullseye-build.docker                |    3 +
 .build/docker/run-tests.sh                         |   79 +-
 .build/run-python-dtests.sh                        |   21 +-
 .build/run-tests.sh                                |   12 +-
 .jenkins/Jenkinsfile                               | 1230 ++++++++------------
 CHANGES.txt                                        |    1 +
 build.xml                                          |   73 +-
 pylib/cassandra-cqlsh-tests.sh                     |    3 +-
 .../org/apache/cassandra/utils/FBUtilities.java    |   17 +-
 .../CassandraXMLJUnitResultFormatter.java          |   23 +-
 .../apache/cassandra/utils/FBUtilitiesTest.java    |    5 +-
 23 files changed, 1713 insertions(+), 920 deletions(-)
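
For context, the intended local flow with the new CI scripts is roughly the following (a sketch; 'test' stands in for any of the test types listed in .build/README.md, and paths are the defaults used by the scripts in this diff):

    # run one test type in docker, leaving junit xml under build/test/output/<type>
    .build/docker/run-tests.sh test
    # aggregate that test type's xml splits and generate its junit html report
    .build/ci/generate-test-report.sh
    # roll all test types up into the sharable build/ci_summary.html overview
    .build/ci/generate-ci-summary.sh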

diff --git a/.build/README.md b/.build/README.md
index 75ccf6656a..72a974afc4 100644
--- a/.build/README.md
+++ b/.build/README.md
@@ -29,66 +29,6 @@ The following applies to all build scripts.
 
     build_dir=/tmp/cass_Mtu462n .build/docker/check-code.sh
 
-Running Sonar analysis (experimental)
--------------------------------------
-
-Run:
-
-    ant sonar
-
-Sonar analysis requires the SonarQube server to be available. If there
-is already some SonarQube server, it can be used by setting the
-following env variables:
-
-    SONAR_HOST_URL=http://sonar.example.com
-    SONAR_CASSANDRA_TOKEN=cassandra-project-analysis-token
-    SONAR_PROJECT_KEY=<key of the Cassandra project in SonarQube>
-
-If SonarQube server is not available, one can be started locally in
-a Docker container. The following command will create a SonarQube
-container and start the server:
-
-    ant sonar-create-server
-
-The server will be available at http://localhost:9000 with admin
-credentials admin/password. The Docker container named `sonarqube`
-is created and left running. When using this local SonarQube server,
-no env variables to configure url, token, or project key are needed,
-and the analysis can be run right away with `ant sonar`.
-
-After the analysis, the server remains running so that one can 
-inspect the results.
-
-To stop the local SonarQube server:
-
-    ant sonar-stop-server
-
-However, this command just stops the Docker container without removing
-it. It allows to start the container later with:
-
-    docker container start sonarqube
-
-and access previous analysis results. To drop the container, run:
-
-    docker container rm sonarqube
-
-When `SONAR_HOST_URL` is not provided, the script assumes a dedicated
-local instance of the SonarQube server and sets it up automatically,
-which includes creating a project, setting up the quality profile, and
-quality gate from the configuration stored in
-[sonar-quality-profile.xml](sonar%2Fsonar-quality-profile.xml) and
-[sonar-quality-gate.json](sonar%2Fsonar-quality-gate.json)
-respectively. To run the analysis with a custom quality profile, start
-the server using `ant sonar-create-server`, create a project manually,
-and set up a desired quality profile for it. Then, create the analysis
-token for the project and export the following env variables:
-
-    SONAR_HOST_URL="http://127.0.0.1:9000";
-    SONAR_CASSANDRA_TOKEN="<token>"
-    SONAR_PROJECT_KEY="<key of the Cassandra project in SonarQube>"
-
-The analysis can be run with `ant sonar`.
-
 
 Building Artifacts (tarball and maven)
 -------------------------------------
@@ -177,7 +117,7 @@ Running other types of tests with docker:
     .build/docker/run-tests.sh jvm-dtest-upgrade
     .build/docker/run-tests.sh dtest
     .build/docker/run-tests.sh dtest-novnode
-    .build/docker/run-tests.sh dtest-offheap
+    .build/docker/run-tests.sh dtest-latest
     .build/docker/run-tests.sh dtest-large
     .build/docker/run-tests.sh dtest-large-novnode
     .build/docker/run-tests.sh dtest-upgrade
@@ -198,3 +138,63 @@ Other python dtest types without docker:
 
     .build/run-python-dtests.sh dtest-upgrade-large
 
+
+Running Sonar analysis (experimental)
+-------------------------------------
+
+Run:
+
+    ant sonar
+
+Sonar analysis requires the SonarQube server to be available. If there
+is already some SonarQube server, it can be used by setting the
+following env variables:
+
+    SONAR_HOST_URL=http://sonar.example.com
+    SONAR_CASSANDRA_TOKEN=cassandra-project-analysis-token
+    SONAR_PROJECT_KEY=<key of the Cassandra project in SonarQube>
+
+If SonarQube server is not available, one can be started locally in
+a Docker container. The following command will create a SonarQube
+container and start the server:
+
+    ant sonar-create-server
+
+The server will be available at http://localhost:9000 with admin
+credentials admin/password. The Docker container named `sonarqube`
+is created and left running. When using this local SonarQube server,
+no env variables to configure url, token, or project key are needed,
+and the analysis can be run right away with `ant sonar`.
+
+After the analysis, the server remains running so that one can
+inspect the results.
+
+To stop the local SonarQube server:
+
+    ant sonar-stop-server
+
+However, this command just stops the Docker container without removing
+it. It allows to start the container later with:
+
+    docker container start sonarqube
+
+and access previous analysis results. To drop the container, run:
+
+    docker container rm sonarqube
+
+When `SONAR_HOST_URL` is not provided, the script assumes a dedicated
+local instance of the SonarQube server and sets it up automatically,
+which includes creating a project, setting up the quality profile, and
+quality gate from the configuration stored in
+[sonar-quality-profile.xml](sonar%2Fsonar-quality-profile.xml) and
+[sonar-quality-gate.json](sonar%2Fsonar-quality-gate.json)
+respectively. To run the analysis with a custom quality profile, start
+the server using `ant sonar-create-server`, create a project manually,
+and set up a desired quality profile for it. Then, create the analysis
+token for the project and export the following env variables:
+
+    SONAR_HOST_URL="http://127.0.0.1:9000";
+    SONAR_CASSANDRA_TOKEN="<token>"
+    SONAR_PROJECT_KEY="<key of the Cassandra project in SonarQube>"
+
+The analysis can be run with `ant sonar`.
diff --git a/.build/check-code.sh b/.build/check-code.sh
index d3baec45d9..60c96dc9ad 100755
--- a/.build/check-code.sh
+++ b/.build/check-code.sh
@@ -24,5 +24,5 @@ command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1
 [ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
 
 # execute
-ant -f "${CASSANDRA_DIR}/build.xml" check dependency-check
+ant -f "${CASSANDRA_DIR}/build.xml" check # dependency-check # FIXME dependency-check now requires NVD key downloaded first
 exit $?
diff --git a/.build/ci/ci_parser.py b/.build/ci/ci_parser.py
new file mode 100755
index 0000000000..1a6bdf13dc
--- /dev/null
+++ b/.build/ci/ci_parser.py
@@ -0,0 +1,307 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+Script to take an arbitrary root directory of subdirectories of junit output format and create a summary .html file
+with their results.
+"""
+
+import argparse
+import cProfile
+import pstats
+import os
+import shutil
+import xml.etree.ElementTree as ET
+from typing import Callable, Dict, List, Tuple, Type
+from pathlib import Path
+
+from junit_helpers import JUnitResultBuilder, JUnitTestCase, JUnitTestSuite, JUnitTestStatus, LOG_FILE_NAME
+from logging_helper import build_logger, mute_logging, CustomLogger
+
+try:
+    from bs4 import BeautifulSoup
+except ImportError:
+    print('bs4 not installed; make sure you have bs4 in your active python env.')
+    exit(1)
+
+
+parser = argparse.ArgumentParser(description="""
+Parses ci results provided ci output in input path and generates html
+results in specified output file. Expects an existing .html file to insert
+results into; this file will be backed up into a <file_name>.bak file in its
+local directory.
+""")
+parser.add_argument('--input', type=str, help='path to input files (recursive directory search for *.xml)')
+# TODO: Change this paradigm to a full "input dir translates into output file", where output file includes some uuid
+# We'll need full support for all job types, not just junit, which will also necessitate refactoring into some kind of
+# TestResultParser, of which JUnit would be one type. But there's a clear pattern here we can extract. Thinking checkstyle.
+parser.add_argument('--output', type=str, help='existing .html output file to append to')
+parser.add_argument('--mute', action='store_true', help='mutes stdout and only logs to log file')
+parser.add_argument('--profile', '-p', action='store_true', help='enable perf profiling on operations')
+parser.add_argument('-v', '-d', '--verbose', '--debug', dest='debug', action='store_true', help='verbose log output')
+args = parser.parse_args()
+if args.input is None or args.output is None:
+    parser.print_help()
+    exit(1)
+
+logger = build_logger(LOG_FILE_NAME, args.debug)  # type: CustomLogger
+if args.mute:
+    mute_logging(logger)
+
+
+def main():
+    check_file_condition(lambda: os.path.exists(args.input), f'Cannot find {args.input}. Aborting.')
+
+    xml_files = [str(file) for file in Path(args.input).rglob('*.xml')]
+    check_file_condition(lambda: len(xml_files) != 0, f'Found 0 .xml files in path: {args.input}. Cannot proceed with .xml extraction.')
+    logger.info(f'Found {len(xml_files)} xml files under: {args.input}')
+
+    test_suites = process_xml_files(xml_files)
+
+    for suite in test_suites.values():
+        if suite.is_empty() and suite.file_count() == 0:
+            logger.warning(f'Have an empty test_suite: {suite.name()} that had no .xml files associated with it. Did the jobs run correctly and produce junit files? Check {suite.get_archive()} for test run command result details.')
+        elif suite.is_empty():
+            logger.warning(f'Got an unexpected empty test_suite: {suite.name()} with no .xml file parsing associated with it. Check {LOG_FILE_NAME}.log when run with -v for details.')
+
+    create_summary_file(test_suites, xml_files, args.output)
+
+
+def process_xml_files(xml_files: List[str]) -> Dict[str, JUnitTestSuite]:
+    """
+    Parses each given .xml file, collecting all found junit test results into JUnitTestSuite containers keyed by suite name.
+    :param xml_files: all .xml files found under args.input
+
+    test_suites = dict()  # type: Dict[str, JUnitTestSuite]
+    test_count = 0
+
+    for file in xml_files:
+        files, tests = process_xml_file(file, test_suites)
+        test_count += tests
+
+    logger.progress(f'Total junit file count: {len(xml_files)}')
+    logger.progress(f'Total suite count: {len(test_suites.keys())}')
+    logger.progress(f'Total test count: {test_count}')
+    passed = 0
+    failed = 0
+    skipped = 0
+
+    for suite in test_suites.values():
+        passed += suite.passed()
+        failed += suite.failed()
+        if suite.failed() != 0:
+            print_errors(suite)
+        skipped += suite.skipped()
+
+    logger.progress(f'-- Passed: {passed}')
+    logger.progress(f'-- Failed: {failed}')
+    logger.progress(f'-- Skipped: {skipped}')
+    return test_suites
+
+
+def process_xml_file(xml_file, test_suites: Dict[str, JUnitTestSuite]) -> Tuple[int, int]:
+    """
+    Pretty straightforward here - walk through and look for tests,
+    parsing them out into our global JUnitTestCase Dicts as we find them
+
+    No thread safety on target Dict -> relying on the "one .xml per suite" rule to keep things clean
+
+    Can be called in context of executor thread.
+    :return: Tuple[file count, test count]
+    """
+
+    # TODO: In extreme cases (python upgrade dtests), this could theoretically be a HUGE file we're materializing in memory. Consider .iterparse or tag sanitization using sed first.
+    with open(xml_file, "rb") as xml_input:
+        try:
+            suite_name = "?"
+            root = ET.parse(xml_input).getroot()  # type: ignore
+            suite_name = str(root.get('name'))
+            logger.progress(f'Processing archive: {xml_file} for test suite: {suite_name}')
+
+            # And make sure we're not racing
+            if suite_name in test_suites:
+                logger.error(f'Got a duplicate suite_name - this will lead to race conditions. Suite: {suite_name}. xml file: {xml_file}. Skipping this file.')
+                return 0, 0
+            else:
+                test_suites[suite_name] = JUnitTestSuite(suite_name)
+
+            active_suite = test_suites[suite_name]
+            # Store this for later logging if we have a failed job; help the user know where to look next.
+            active_suite.set_archive(xml_file)
+            test_file_count = 0
+            test_count = 0
+            fc = process_test_cases(active_suite, xml_file, root)
+            if fc != 0:
+                test_file_count += 1
+                test_count += fc
+        except (EOFError, ET.ParseError) as e:
+            logger.error(f'Error on {xml_file}: {e}. Skipping; will be missing results for {suite_name}')
+            return 0, 0
+        except Exception as e:
+            logger.critical(f'Got unexpected error while parsing {xml_file}: {e}. Aborting.')
+            raise e
+    return test_file_count, test_count
+
+
+def print_errors(suite: JUnitTestSuite) -> None:
+    logger.warning(f'\n[Printing {suite.failed()} tests from suite: {suite.name()}]')
+    for testcase in suite.get_tests(JUnitTestStatus.FAILURE):
+        logger.warning(f'{testcase}')
+
+
+def process_test_cases(suite: JUnitTestSuite, file_name: str, root) -> int:
+    """
+    For a given input .xml, will extract all JUnitTestCase matching objects and store them in the global registry keyed off
+    suite name.
+
+    Can be called in context of executor thread.
+    :param suite: The JUnitTestSuite object we're currently working with
+    :param file_name: .xml file_name to check for tests. junit format.
+    :param root: etree root for file_name
+    :return : count of tests extracted from this file_name
+    """
+    xml_exclusions = ['logback', 'checkstyle']
+    if any(x in file_name for x in xml_exclusions):
+        return 0
+
+    # Search inside entire hierarchy since sometimes it's at the root and sometimes one level down.
+    test_count = len(root.findall('.//testcase'))
+    if test_count == 0:
+        logger.warning(f'Appear to be processing an .xml file without any junit tests in it: {file_name}. Update .xml exclusions to exclude this.')
+        if args.debug:
+            logger.info(ET.tostring(root))
+        return 0
+
+    suite.add_file(file_name)
+    found = 0
+    for testcase in root.iter('testcase'):
+        processed = JUnitTestCase(testcase)
+        suite.add_testcase(processed)
+        found = 1
+    if found == 0:
+        logger.error(f'file: {file_name} has test_count: {test_count} but root.iter iterated across nothing!')
+        logger.error(ET.tostring(root))
+    return test_count
+
+
+# TODO: Update this to build the entire summary page, not just append failures to an existing one.
+# This should be trivial to do using JUnitTestSuite.failed, passed, etc methods
+def create_summary_file(test_suites: Dict[str, JUnitTestSuite], xml_files, output: str) -> None:
+    """
+    Will create a table with all failed tests in it organized by sorted suite name.
+    :param test_suites: Collection of JUnitTestSuite's parsed out pass/fail data
+    :param output: Path to the .html we want to append to the <body> of
+    """
+
+    # if needed create a blank ci_summary.html
+    if not os.path.exists(args.output):
+        with open(args.output, "w") as ci_summary_html:
+            ci_summary_html.write('<html><head></head><body><h1>CI Summary</h1></body></html>')
+
+    with open(output, 'r') as file:
+        soup = BeautifulSoup(file, 'html.parser')
+
+    failures_tag = soup.new_tag("div")
+    failures_tag.string = '<br/><br/>[Test Failure Details]<br/><br/>'
+    suites_tag = soup.new_tag("div")
+    suites_tag.string = '<br/><br/><hr/>[Test Suite Details]<br/><br/>'
+    suites_builder = JUnitResultBuilder('Suites')
+    suites_builder.label_columns(['Suite', 'Passed', 'Failed', 'Skipped'], ["width: 70%; text-align: left;", "width: 10%; text-align: right", "width: 10%; text-align: right", "width: 10%; text-align: right"])
+
+    JUnitResultBuilder.add_style_tags(soup)
+
+    # We cut off at 200 failures; if you have more than that, chances are you have a bad run and there's no point in
+    # continuing to pollute the summary file and blow past its file size. The inlined failures are a tool for the
+    # attaching / review process, not the primary workflow of fixing tests.
+    total_passed_count = 0
+    total_skipped_count = 0
+    total_failure_count = 0
+    for suite_name in sorted(test_suites.keys()):
+        suite = test_suites[suite_name]
+        passed_count = suite.passed()
+        skipped_count = suite.skipped()
+        failure_count = suite.failed()
+
+        suites_builder.add_row([suite_name, str(passed_count), str(failure_count), str(skipped_count)])
+
+        if failure_count == 0:
+            # Don't append anything to results in the happy path case.
+            logger.debug(f'No failed tests in suite: {suite_name}')
+        elif total_failure_count < 200:
+            # Else independent table per suite.
+            failures_builder = JUnitResultBuilder(suite_name)
+            failures_builder.label_columns(['Class', 'Method', 'Output', 'Duration'], ["width: 15%; text-align: left;", "width: 15%; text-align: left;", "width: 60%; text-align: left;", "width: 10%; text-align: right;"])
+            for test in suite.get_tests(JUnitTestStatus.FAILURE):
+                failures_builder.add_row(test.row_data())
+            failures_tag.append(BeautifulSoup(failures_builder.build_table(), 'html.parser'))
+            total_failure_count += failure_count
+            if total_failure_count > 200:
+                logger.critical(f'Saw {total_failure_count} failures; greater than 200 threshold. Not appending further failure details to {output}.')
+        total_passed_count += passed_count
+        total_skipped_count += skipped_count
+
+    # totals, manual html
+    totals_tag = soup.new_tag("div")
+    totals_tag.string = f"""[Totals]<br/><br/><table style="width:100px">
+        <tr><td >Passed</td><td></td><td align="right"> {total_passed_count}</td></tr>
+        <tr><td >Failed</td><td></td><td align="right"> {total_failure_count}</td></tr>
+        <tr><td >Skipped</td><td></td><td align="right"> {total_skipped_count}</td></tr>
+        <tr><td >Total</td><td>&nbsp;&nbsp;&nbsp;</td><td align="right"> {total_passed_count + total_failure_count + total_skipped_count}</td></tr>
+        <tr><td >Files</td><td></td><td align="right"> {len(xml_files)}</td></tr>
+        <tr><td >Suites</td><td></td><td align="right"> {len(test_suites.keys())}</td></tr></table><hr/>
+        """
+
+    soup.body.append(totals_tag)
+    soup.body.append(failures_tag)
+    suites_tag.append(BeautifulSoup(suites_builder.build_table(), 'html.parser'))
+    soup.body.append(suites_tag)
+
+    # Only backup the output file if we've gotten this far
+    shutil.copyfile(output, output + '.bak')
+
+    # We write w/formatter set to None as invalid chars above our insertion point in the input file (from other
+    # tests, test output, etc) can cause the parser to get very confused and do Bad Things.
+    with open(output, 'w') as file:
+        file.write(soup.prettify(formatter=None))
+    logger.progress(f'Test failure details appended to file: {output}')
+
+
+def check_file_condition(function: Callable[[], bool], msg: str) -> None:
+    """
+    Specifically raises a FileNotFoundError if the given check Callable returns False
+    """
+    if not function():
+        log_and_raise(msg, FileNotFoundError)
+
+
+def log_and_raise(msg: str, error_type: Type[BaseException]) -> None:
+    logger.critical(msg)
+    raise error_type(msg)
+
+
+if __name__ == "__main__":
+    if args.profile:
+        profiler = cProfile.Profile()
+        profiler.enable()
+        main()
+        profiler.disable()
+        stats = pstats.Stats(profiler).sort_stats('cumulative')
+        stats.print_stats()
+    else:
+        main()
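
A standalone sketch of the parser's CLI (per the argparse options above; the output file is created if missing, and backed up to a .bak alongside before being rewritten):

    # recursively scan build/test/output/ for junit *.xml and append results
    .build/ci/ci_parser.py --input build/test/output/ --output build/ci_summary.html -v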
diff --git a/.build/ci/generate-ci-summary.sh b/.build/ci/generate-ci-summary.sh
new file mode 100755
index 0000000000..4674966743
--- /dev/null
+++ b/.build/ci/generate-ci-summary.sh
@@ -0,0 +1,60 @@
+#!/bin/sh -e
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#
+# Creates ci_summary.html
+# This expects a folder hierarchy that separates test types/targets.
+#  For example:
+#    build/test/output/test
+#    build/test/output/jvm-dtest
+#    build/test/output/dtest
+#
+# The ci_summary.html file, along with the results_details.tar.xz,
+#  are the sharable artefacts used to satisfy pre-commit CI from a private CI.
+#
+
+# variables, with defaults
+[ "x${CASSANDRA_DIR}" != "x" ] || CASSANDRA_DIR="$(readlink -f $(dirname 
"$0")/../..)"
+[ "x${DIST_DIR}" != "x" ] || DIST_DIR="${CASSANDRA_DIR}/build"
+
+# pre-conditions
+command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1; }
+[ -d "${CASSANDRA_DIR}" ] || { echo >&2 "Directory ${CASSANDRA_DIR} must exist"; exit 1; }
+[ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
+[ -d "${DIST_DIR}" ] || { mkdir -p "${DIST_DIR}" ; }
+
+# generate CI summary file
+cd ${DIST_DIR}/
+
+cat >${DIST_DIR}/ci_summary.html <<EOL
+<html>
+<head></head>
+<body>
+<h1>CI Summary</h1>
+<h2>sha:  $(git ls-files -s ${CASSANDRA_DIR} | git hash-object --stdin)</h2>
+<h2>branch: $(git -C ${CASSANDRA_DIR} branch --remote --verbose --no-abbrev --contains | sed -rne 's/^[^\/]*\/([^\ ]+).*$/\1/p')</h2>
+<h2>repo: $(git -C ${CASSANDRA_DIR} remote get-url origin)</h2>
+<h2>Date: $(date)</h2>
+</body>
+</html>
+...
+EOL
+
+${CASSANDRA_DIR}/.build/ci/ci_parser.py --mute --input ${DIST_DIR}/test/output/ --output ${DIST_DIR}/ci_summary.html
+
+exit $?
+
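
Both variables above are optional overrides; a sketch with hypothetical paths:

    CASSANDRA_DIR=/path/to/cassandra DIST_DIR=/path/to/cassandra/build \
        /path/to/cassandra/.build/ci/generate-ci-summary.sh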
diff --git a/.build/check-code.sh b/.build/ci/generate-test-report.sh
similarity index 62%
copy from .build/check-code.sh
copy to .build/ci/generate-test-report.sh
index d3baec45d9..00672ecea5 100755
--- a/.build/check-code.sh
+++ b/.build/ci/generate-test-report.sh
@@ -15,14 +15,25 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+#
+# Aggregates all test xml files into one and generates the junit html report.
+#  see the 'generate-test-report' target in build.xml for more.
+# It is intended to be used to aggregate all splits on each test type/target,
+#  before calling generate-ci-summary.sh to create the overview summary of
+#  all test types and failures in a pipeline run.
+#
+
 # variables, with defaults
-[ "x${CASSANDRA_DIR}" != "x" ] || { CASSANDRA_DIR="$(dirname "$0")/.."; }
+[ "x${CASSANDRA_DIR}" != "x" ] || CASSANDRA_DIR="$(readlink -f $(dirname 
"$0")/../..)"
+[ "x${DIST_DIR}" != "x" ] || DIST_DIR="${CASSANDRA_DIR}/build"
 
 # pre-conditions
 command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1; }
 [ -d "${CASSANDRA_DIR}" ] || { echo >&2 "Directory ${CASSANDRA_DIR} must exist"; exit 1; }
 [ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
+[ -d "${DIST_DIR}" ] || { mkdir -p "${DIST_DIR}" ; }
 
-# execute
-ant -f "${CASSANDRA_DIR}/build.xml" check dependency-check
+# generate test xml summary file and html report directories
+ant -f "${CASSANDRA_DIR}/build.xml" generate-test-report
 exit $?
+
diff --git a/.build/ci/junit_helpers.py b/.build/ci/junit_helpers.py
new file mode 100644
index 0000000000..d7d3702e99
--- /dev/null
+++ b/.build/ci/junit_helpers.py
@@ -0,0 +1,320 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+In-memory representations of JUnit test results and some helper methods to construct .html output
+based on those results
+"""
+
+import logging
+import xml.etree.ElementTree as ET
+
+from bs4 import BeautifulSoup
+from enum import Enum
+from jinja2 import Template
+from typing import Any, Dict, Iterable, List, Set, Tuple
+
+LOG_FILE_NAME = 'junit_parsing'
+logger = logging.getLogger(LOG_FILE_NAME)
+
+
+class JUnitTestStatus(Enum):
+    """
+    Maps to the string tag expected in the child element in junit output
+    """
+    label: str
+
+    UNKNOWN = (0, 'unknown')
+    PASSED = (1, 'passed')
+    FAILURE = (2, 'failure')
+    SKIPPED = (3, 'skipped')
+    # ERROR and FAILURE are unfortunately used interchangeably by some of our suites, so we combine them on parsing
+    ERROR = (4, 'error')
+
+    def __new__(cls, value, label) -> Any:
+        obj = object.__new__(cls)
+        obj._value_ = value
+        obj.label = label
+        return obj
+
+    def __str__(self):
+        return self.label
+
+    @staticmethod
+    def html_cell_order() -> Tuple:
+        """
+        We won't have UNKNOWN, and ERROR is merged into FAILURE. This is our preferred order to represent things in html.
+        """
+        return JUnitTestStatus.FAILURE, JUnitTestStatus.PASSED, JUnitTestStatus.SKIPPED
+
+
+class JUnitResultBuilder:
+    """
+    Wraps up jinja templating for our junit based results. Doing this manually was proving to be a total headache.
+    That said, this didn't turn out to be a picnic on its own either. The particulars of .html parsing and this
+    templating combined with BeautifulSoup mean things are... very finicky. Bad parsing of input from the .sh
+    or other sources can make bs4 replace things in weird ways.
+    """
+    def __init__(self, name: str) -> None:
+        self._name = name
+        self._labels = []  # type: List[str]
+        self._column_styles = []  # type: List[str]
+        self._rows = []  # type: List[List[str]]
+        self._header = ['unknown', 'unknown', 'unknown', 'unknown']
+
+        # Have to have the 4 <th> members since the stylesheet formats them based on position and it'll get all stupid
+        # otherwise.
+        self._template = Template('''
+        <table class="table-fixed">
+          <tr style = "height: 18px;">
+            <th colspan="4"> {{header}} </th>
+          </tr>
+          <tr>
+            <th style="{{ column_styles[0] }}">{{ labels[0] }}</td>
+            <th style="{{ column_styles[1] }}">{{ labels[1] }}</td>
+            <th style="{{ column_styles[2] }}">{{ labels[2] }}</td>
+            <th style="{{ column_styles[3] }}">{{ labels[3] }}</td>
+          </tr>
+          {% for row in rows %}
+          <tr>
+            <td style="{{ column_styles[0] }}">{{ row[0] }}</td>
+            <td style="{{ column_styles[1] }}">{{ row[1] }}</td>
+            <td style="{{ column_styles[2] }}">{{ row[2] }}</td>
+            <td style="{{ column_styles[3] }}">{{ row[3] }}</td>
+          </tr>
+          {% endfor %}
+        </table>
+        ''')
+
+    @staticmethod
+    def add_style_tags(soup: BeautifulSoup) -> None:
+        """
+        We want to be opinionated about the width of our tables for our test suites; the messages dominate the output
+        so we want to dedicate the largest amount of space to them and limit word-wrapping
+        """
+        style_tag = soup.new_tag("style")
+        style_tag.string = """
+        table, tr {
+            border: 1px solid black; border-collapse: collapse;
+        }
+        .table-fixed {
+            table-layout: fixed;
+            width: 100%;
+        }
+        """
+        soup.head.append(style_tag)
+
+    def label_columns(self, cols: List[str], column_styles: List[str]) -> None:
+        if len(cols) != 4:
+            raise AssertionError(f'Got invalid number of columns on label_columns: {len(cols)}. Expected: 4.')
+        self._labels = cols
+        self._column_styles = column_styles
+
+    def add_row(self, row: List[str]) -> None:
+        if len(row) != 4:
+            raise AssertionError(f'Got invalid number of columns on add_row: {len(row)}. Expected: 4.')
+        self._rows.append(row)
+
+    def build_table(self) -> str:
+        return self._template.render(header=f'{self._name}', labels=self._labels, column_styles=self._column_styles, rows=self._rows)
+
+
+class JUnitTestCase:
+    """
+    Pretty straightforward in-memory representation of the state of a jUnit test. Not the _most_ tolerant of bad input,
+    so don't test your luck.
+    """
+    def __init__(self, testcase: ET.Element) -> None:
+        """
+        From a given xml element, constructs a junit testcase. Doesn't do any sanity checking to make sure you gave
+        it something correct, so... don't screw up.
+
+        Here's our general junit formatting:
+
+        <testcase classname = "org.apache.cassandra.index.sai.cql.VectorUpdateDeleteTest" name = "updateTest-_jdk11" time = "0.314">
+            <failure message = "Result set does not contain a row with pk = 0" type = "junit.framework.AssertionFailedError">
+            # The following is stored in failure.text:
+            DETAILED ERROR MESSAGE / STACK TRACE
+            DETAILED ERROR MESSAGE / STACK TRACE
+            ...
+            </failure>
+        </testcase>
+
+        And our skipped format:
+        <testcase classname="org.apache.cassandra.distributed.test.PreviewRepairCoordinatorFastTest" name="snapshotFailure[PARALLEL/true]-_jdk11" time="0.218">
+          <skipped message="Parallel repair does not perform snapshots" />
+        </testcase>
+
+        Same for errors
+
+        We conflate the <error and <failure sub-elements in our junit failures and will need to combine those on processing.
+        """
+        if testcase is None:
+            raise AssertionError('Got an empty testcase; cannot create JUnitTestCase from nothing.')
+        # No point in including the whole long o.a.c. thing. At least for now. This could bite us later if we end up with
+        # other pathed test cases but /shrug
+        self._class_name = testcase.get('classname', '').replace('org.apache.cassandra.', '')  # type: str
+        self._test_name = testcase.get('name', '')  # type: str
+        self._message = 'Passed'  # type: str
+        self._status = JUnitTestStatus.PASSED  # type: JUnitTestStatus
+        self._time = testcase.get('time', '')  # type: str
+
+        # For the current set of tests, we don't have > 1 child tag indicating something went wrong. So we check to ensure
+        # that remains true and will assert out if we hit something unexpected.
+        saw_error = 0
+
+        def _check_for_child_element(failure_type: JUnitTestStatus) -> None:
+            """
+            The presence of any failure, error, or skipped child elements indicates this test wasn't a normal 'pass'.
+            We want to extract the message from the child if it has one, as well as update the status of this object,
+            glomming together ERROR and FAILURE cases here. We combine those two as some legit test failures
+            are reported as <error /> in the pytest cases.
+            """
+            nonlocal testcase
+            child = testcase.find(failure_type.label)
+            if child is None:
+                return
+
+            nonlocal saw_error
+            if saw_error != 0:
+                raise AssertionError(f'Got a test with > 1 "bad" state (error, failed, skipped). classname: {self._class_name}. test: {self._test_name}.')
+            saw_error = 1
+
+            # We don't know if we're going to have message attribute data, text attribute, or text inside our tag. So
+            # we just connect all three
+            final_msg = '-'.join(filter(None, (child.get('message'), child.get('text'), child.text)))
+
+            self._message = final_msg
+            if failure_type == JUnitTestStatus.ERROR or failure_type == JUnitTestStatus.FAILURE:
+                self._status = JUnitTestStatus.FAILURE
+            else:
+                self._status = failure_type
+
+        _check_for_child_element(JUnitTestStatus.FAILURE)
+        _check_for_child_element(JUnitTestStatus.ERROR)
+        _check_for_child_element(JUnitTestStatus.SKIPPED)
+
+    def row_data(self) -> List[str]:
+        return [self._class_name, self._test_name, f"<pre>{self._message}</pre>", str(self._time)]
+
+    def status(self) -> JUnitTestStatus:
+        return self._status
+
+    def message(self) -> str:
+        return self._message
+
+    def __hash__(self) -> int:
+        """
+        We want to allow overwriting of existing combinations of class + test names, since our tarballs of results will
+        have us doing potentially duplicate sequential processing of files and we just want to keep the most recent one.
+        Of note, sorting the tarball contents and navigating to just one copy to process was
+        _significantly_ slower than just brute-force overwriting this way. Like... I gave up after 10 minutes vs. < 1 second.
+        """
+        return hash((self._class_name, self._test_name))
+
+    def __eq__(self, other) -> bool:
+        if isinstance(other, JUnitTestCase):
+            return (self._class_name, self._test_name) == (other._class_name, other._test_name)
+        return NotImplemented
+
+    def __str__(self) -> str:
+        """
+        We slice the message here; don't rely on this for anything where you need full reporting
+        :return:
+        """
+        clean_msg = self._message.replace('\n', ' ')
+        return (f"JUnitTestCase(class_name='{self._class_name}', "
+                f"name='{self._test_name}', msg='{clean_msg[:50]}', "
+                f"time={self._time}, status={self._status.name})")
+
+
+class JUnitTestSuite:
+    """
+    Straightforward container for a set of tests.
+    """
+
+    def __init__(self, name: str):
+        self._name = name  # type: str
+        self._suites = dict()  # type: Dict[JUnitTestStatus, Set[JUnitTestCase]]
+        self._files = set()  # type: Set[str]
+        # We only allow one archive to be associated with each JUnitTestSuite
+        self._archive = 'unknown'  # type: str
+        for status in JUnitTestStatus:
+            self._suites[status] = set()
+
+    def add_testcase(self, newcase: JUnitTestCase) -> None:
+        """
+        Replaces if existing is found.
+        """
+        if newcase.status() == JUnitTestStatus.UNKNOWN:
+            raise AssertionError(f'Attempted to add a testcase with an unknown status: {newcase}. Aborting.')
+        self._suites[newcase.status()].discard(newcase)
+        self._suites[newcase.status()].add(newcase)
+
+    def get_tests(self, status: JUnitTestStatus) -> Iterable[JUnitTestCase]:
+        """
+        Returns sorted list of tests, class name first then test name
+        """
+        return sorted(self._suites[status], key=lambda x: (x._class_name, x._test_name))
+
+    def passed(self) -> int:
+        return self.count(JUnitTestStatus.PASSED)
+
+    def failed(self) -> int:
+        return self.count(JUnitTestStatus.FAILURE)
+
+    def skipped(self) -> int:
+        return self.count(JUnitTestStatus.SKIPPED)
+
+    def count(self, status: JUnitTestStatus) -> int:
+        return len(self._suites[status])
+
+    def name(self) -> str:
+        return self._name
+
+    def is_empty(self) -> bool:
+        return self.passed() == 0 and self.failed() == 0 and self.skipped() == 0
+
+    def set_archive(self, name: str) -> None:
+        if self._archive != "unknown":
+            msg = f'Attempted to set archive for suite: {self._name} when archive already set: {self._archive}. This is a bug.'
+            logger.critical(msg)
+            raise AssertionError(msg)
+        self._archive = name
+
+    def get_archive(self) -> str:
+        return self._archive
+
+    def add_file(self, name: str) -> None:
+        # Just silently noop if we already have one, since dupes in the tarball effectively indicate the same thing: that we have it.
+        self._files.add(name)
+
+    def file_count(self) -> int:
+        """
+        Returns count of _unique_ files associated with this suite, not necessarily the _absolute_ count of files, since
+        we don't bother keeping count of multiple instances of a .xml file in the tarball.
+        :return:
+        """
+        return len(self._files)
+
+    @staticmethod
+    def headers() -> List[str]:
+        result = ['Suite']
+        for status in JUnitTestStatus.html_cell_order():
+            result.append(status.name)
+        return result
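
A minimal usage sketch of these helpers (assuming .build/ci is on PYTHONPATH and bs4/jinja2 are installed; the class and test names here are hypothetical):

    import xml.etree.ElementTree as ET
    from junit_helpers import JUnitTestCase, JUnitTestSuite, JUnitTestStatus

    # a failed testcase in the junit shape documented above
    xml = ('<testcase classname="org.apache.cassandra.FooTest" name="testBar" time="0.1">'
           '<failure message="boom">stack trace</failure></testcase>')
    suite = JUnitTestSuite('unit')
    suite.add_testcase(JUnitTestCase(ET.fromstring(xml)))
    assert suite.failed() == 1
    for test in suite.get_tests(JUnitTestStatus.FAILURE):  # sorted by class, then test name
        print(test)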
diff --git a/.build/ci/logging.sh b/.build/ci/logging.sh
new file mode 100644
index 0000000000..4fd12b3ec1
--- /dev/null
+++ b/.build/ci/logging.sh
@@ -0,0 +1,124 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+export TEXT_RED="0;31"
+export TEXT_GREEN="0;32"
+export TEXT_LIGHTGREEN="1;32"
+export TEXT_BROWN="0;33"
+export TEXT_YELLOW="1;33"
+export TEXT_BLUE="0;34"
+export TEXT_LIGHTBLUE="1;34"
+export TEXT_PURPLE="0;35"
+export TEXT_LIGHTPURPLE="1;35"
+export TEXT_CYAN="0;36"
+export TEXT_LIGHTCYAN="1;36"
+export TEXT_LIGHTGRAY="0;37"
+export TEXT_WHITE="1;37"
+export TEXT_DARKGRAY="1;30"
+export TEXT_LIGHTRED="1;31"
+
+export SILENCE_LOGGING=false
+export LOG_TO_FILE="${LOG_TO_FILE:-false}"
+
+disable_logging() {
+  export SILENCE_LOGGING=true
+}
+
+enable_logging() {
+  export SILENCE_LOGGING=false
+}
+
+echo_color() {
+  if [[ $LOG_TO_FILE == "true" ]]; then
+    echo "$1"
+  elif [[ $SILENCE_LOGGING != "true" ]]; then
+    echo -e "\033[1;${2}m${1}\033[0m"
+  fi
+}
+
+log_header() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    log_separator
+    echo_color "$1" $TEXT_GREEN
+    log_separator
+  fi
+}
+
+log_progress() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "$1" $TEXT_LIGHTCYAN
+  fi
+}
+
+log_info() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "$1" $TEXT_LIGHTGRAY
+  fi
+}
+
+log_quiet() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "$1" $TEXT_DARKGRAY
+  fi
+}
+
+# For transient always-on debugging
+log_transient() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "[TRANSIENT]: $1" $TEXT_BROWN
+  fi
+}
+
+# For durable user-selectable debugging
+log_debug() {
+  if [[ "$SILENCE_LOGGING" = "true" ]]; then
+    return
+  fi
+
+  if [[ "${DEBUG:-false}" == true || "${DEBUG_LOGGING:-false}" == true ]]; then
+    echo_color "[DEBUG] $1" $TEXT_PURPLE
+  fi
+}
+
+log_todo() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "TODO: $1" $TEXT_LIGHTPURPLE
+  fi
+}
+
+log_warning() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "WARNING: $1" $TEXT_YELLOW
+  fi
+}
+
+log_error() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "ERROR: $1" $TEXT_RED
+  fi
+}
+
+log_separator() {
+  if [[ $SILENCE_LOGGING != "true" ]]; then
+    echo_color "--------------------------------------------" $TEXT_GREEN
+  fi
+}
\ No newline at end of file
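
These functions are meant to be sourced (a sketch; log_debug prints only when DEBUG or DEBUG_LOGGING is true, and LOG_TO_FILE=true swaps the ANSI colors for plain text):

    . .build/ci/logging.sh
    log_header "building jars"
    log_warning "low disk space"
    DEBUG_LOGGING=true log_debug "printed"
    log_debug "not printed"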
diff --git a/.build/ci/logging_helper.py b/.build/ci/logging_helper.py
new file mode 100755
index 0000000000..90f4c9f5c0
--- /dev/null
+++ b/.build/ci/logging_helper.py
@@ -0,0 +1,136 @@
+#!/usr/bin/env python3
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+"""
+We want to add a little functionality on top of built-in logging: colorization, some new log levels, and built-in
+logging to a file, as well as some other conveniences.
+"""
+
+import logging
+import threading
+from enum import Enum
+from logging import Logger
+from logging.handlers import RotatingFileHandler
+
+
+PROGRESS_LEVEL_NUM = 25
+SECTION_LEVEL_NUM = 26
+logging.addLevelName(PROGRESS_LEVEL_NUM, 'PROGRESS')
+logging.addLevelName(SECTION_LEVEL_NUM, 'SECTION')
+
+
+class LogLevel(Enum):
+    """
+    Matches logging.<VALUE> int levels; wrapped in enum here for convenience
+    """
+    CRITICAL = 50
+    FATAL = CRITICAL
+    ERROR = 40
+    WARNING = 30
+    WARN = WARNING
+    INFO = 20
+    DEBUG = 10
+    NOTSET = 0
+
+
+class CustomLogger(Logger):
+    # Some decorations to match the paradigm used in some other .sh files
+    def progress(self, message: str, *args, **kws) -> None:
+        if self.isEnabledFor(PROGRESS_LEVEL_NUM):
+            self._log(PROGRESS_LEVEL_NUM, message, args, **kws)
+
+    def separator(self, *args, **kws) -> None:
+        if self.isEnabledFor(logging.DEBUG) and self.isEnabledFor(SECTION_LEVEL_NUM):
+            self._log(SECTION_LEVEL_NUM, '-----------------------------------------------------------------------------', args, **kws)
+
+    def header(self, message: str, *args, **kws) -> None:
+        if self.isEnabledFor(logging.DEBUG) and self.isEnabledFor(SECTION_LEVEL_NUM):
+            self._log(SECTION_LEVEL_NUM, f'----[{message}]----', args, **kws)
+
+
+logging.setLoggerClass(CustomLogger)
+LOG_FORMAT_STRING = '%(asctime)s - [tid:%(threadid)s] - [%(levelname)s]::%(message)s'
+
+
+def build_logger(name: str, verbose: bool) -> CustomLogger:
+    logger = CustomLogger(name)
+    logger.setLevel(logging.INFO)
+    logger.addFilter(ThreadContextFilter())
+
+    stdout_handler = logging.StreamHandler()
+    file_handler = RotatingFileHandler(f'{name}.log')
+
+    formatter = CustomFormatter()
+    stdout_handler.setFormatter(formatter)
+    # Don't want color escape characters in our file logging, so we just use the string rather than the whole formatter
+    file_handler.setFormatter(logging.Formatter(LOG_FORMAT_STRING))
+
+    logger.addHandler(stdout_handler)
+    logger.addHandler(file_handler)
+
+    if verbose:
+        logger.setLevel(logging.DEBUG)
+
+    # Prevent root logger propagation from duplicating results
+    logger.propagate = False
+    return logger
+
+
+def set_loglevel(logger: logging.Logger, level: LogLevel) -> None:
+    if logger.handlers:
+        for handler in logger.handlers:
+            handler.setLevel(level.value)
+
+
+def mute_logging(logger: logging.Logger) -> None:
+    if logger.handlers:
+        for handler in logger.handlers:
+            handler.setLevel(logging.CRITICAL + 1)
+
+
+# Since we'll thread, let's point out threadid in our format
+class ThreadContextFilter(logging.Filter):
+    def filter(self, record):
+        record.threadid = threading.get_ident()
+        return True
+
+
+class CustomFormatter(logging.Formatter):
+    grey = "\x1b[38;21m"
+    blue = "\x1b[34;21m"
+    green = "\x1b[32;21m"
+    yellow = "\x1b[33;21m"
+    red = "\x1b[31;21m"
+    bold_red = "\x1b[31;1m"
+    reset = "\x1b[0m"
+    purple = "\x1b[35m"
+
+    FORMATS = {
+        PROGRESS_LEVEL_NUM: blue + LOG_FORMAT_STRING + reset,
+        SECTION_LEVEL_NUM: green + LOG_FORMAT_STRING + reset,
+        logging.DEBUG: purple + LOG_FORMAT_STRING + reset,
+        logging.INFO: grey + LOG_FORMAT_STRING + reset,
+        logging.WARNING: yellow + LOG_FORMAT_STRING + reset,
+        logging.ERROR: red + LOG_FORMAT_STRING + reset,
+        logging.CRITICAL: bold_red + LOG_FORMAT_STRING + reset
+    }
+
+    def format(self, record):
+        log_fmt = self.FORMATS.get(record.levelno, LOG_FORMAT_STRING)
+        formatter = logging.Formatter(log_fmt)
+        return formatter.format(record)
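
A minimal usage sketch (assuming .build/ci is on PYTHONPATH; 'demo' is a hypothetical logger name, and demo.log is written in the working directory):

    from logging_helper import build_logger, mute_logging

    logger = build_logger('demo', verbose=False)  # logs to stdout and demo.log
    logger.progress('halfway done')               # custom PROGRESS level (25)
    logger.warning('something looks off')
    mute_logging(logger)                          # raises handler thresholds above CRITICAL
    logger.error('suppressed on both handlers')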
diff --git a/.build/check-code.sh b/.build/ci/precommit_check.sh
similarity index 56%
copy from .build/check-code.sh
copy to .build/ci/precommit_check.sh
index d3baec45d9..160cca04c1 100755
--- a/.build/check-code.sh
+++ b/.build/ci/precommit_check.sh
@@ -1,4 +1,4 @@
-#!/bin/sh -e
+#!/bin/bash
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -15,14 +15,35 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# variables, with defaults
-[ "x${CASSANDRA_DIR}" != "x" ] || { CASSANDRA_DIR="$(dirname "$0")/.."; }
 
-# pre-conditions
-command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1; }
-[ -d "${CASSANDRA_DIR}" ] || { echo >&2 "Directory ${CASSANDRA_DIR} must exist"; exit 1; }
-[ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
+source "logging.sh"
 
-# execute
-ant -f "${CASSANDRA_DIR}/build.xml" check dependency-check
-exit $?
+skip_mypy=(
+    "./logging_helper.py"
+)
+
+failed=0
+log_progress "Linting ci_parser..."
+for i in `find . -maxdepth 1 -name "*.py"`; do
+    log_progress "Checking $i..."
+    flake8 "$i"
+    if [[ $? != 0 ]]; then
+        failed=1
+    fi
+
+    if [[ ! " ${skip_mypy[*]} " =~ ${i} ]]; then
+        mypy --ignore-missing-imports "$i"
+        if [[ $? != 0 ]]; then
+            failed=1
+        fi
+    fi
+done
+
+
+if [[ $failed -eq 1 ]]; then
+    log_error "Failed linting. See above errors; don't merge until clean."
+    exit 1
+else
+    log_progress "All scripts passed checks"
+    exit 0
+fi
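
The script sources logging.sh by relative path, so it must run from .build/ci; it also assumes flake8 and mypy are on PATH (requirements.txt pins only bs4 and jinja2). A sketch:

    cd .build/ci
    pip install -r requirements.txt flake8 mypy
    ./precommit_check.sh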
diff --git a/.build/check-code.sh b/.build/ci/requirements.txt
old mode 100755
new mode 100644
similarity index 61%
copy from .build/check-code.sh
copy to .build/ci/requirements.txt
index d3baec45d9..7d92de5922
--- a/.build/check-code.sh
+++ b/.build/ci/requirements.txt
@@ -1,4 +1,3 @@
-#!/bin/sh -e
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -15,14 +14,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# variables, with defaults
-[ "x${CASSANDRA_DIR}" != "x" ] || { CASSANDRA_DIR="$(dirname "$0")/.."; }
-
-# pre-conditions
-command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1; }
-[ -d "${CASSANDRA_DIR}" ] || { echo >&2 "Directory ${CASSANDRA_DIR} must exist"; exit 1; }
-[ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
+#
+# Changes to this file must also be put into .build/docker/bullseye-build.docker
 
-# execute
-ant -f "${CASSANDRA_DIR}/build.xml" check dependency-check
-exit $?
+beautifulsoup4==4.12.3
+jinja2==3.1.3
diff --git a/.build/docker/_docker_run.sh b/.build/docker/_docker_run.sh
index 659f5bacf3..5d93563484 100755
--- a/.build/docker/_docker_run.sh
+++ b/.build/docker/_docker_run.sh
@@ -26,10 +26,14 @@
 #
 ################################
 
+[ $DEBUG ] && set -x
+
 # variables, with defaults
 [ "x${cassandra_dir}" != "x" ] || cassandra_dir="$(readlink -f $(dirname 
"$0")/../..)"
 [ "x${build_dir}" != "x" ] || build_dir="${cassandra_dir}/build"
+[ "x${m2_dir}" != "x" ] || m2_dir="${HOME}/.m2/repository"
 [ -d "${build_dir}" ] || { mkdir -p "${build_dir}" ; }
+[ -d "${m2_dir}" ] || { mkdir -p "${m2_dir}" ; }
 
 java_version_default=`grep 'property\s*name="java.default"' ${cassandra_dir}/build.xml |sed -ne 's/.*value="\([^"]*\)".*/\1/p'`
 java_version_supported=`grep 'property\s*name="java.supported"' ${cassandra_dir}/build.xml |sed -ne 's/.*value="\([^"]*\)".*/\1/p'`
@@ -118,11 +122,11 @@ docker_command="export ANT_OPTS=\"-Dbuild.dir=\${DIST_DIR} ${CASSANDRA_DOCKER_AN
 # run without the default seccomp profile
 # re-use the host's maven repository
 container_id=$(docker run --name ${container_name} -d --security-opt seccomp=unconfined --rm \
-    -v "${cassandra_dir}":/home/build/cassandra -v ~/.m2/repository/:/home/build/.m2/repository/ -v "${build_dir}":/dist \
+    -v "${cassandra_dir}":/home/build/cassandra -v ${m2_dir}:/home/build/.m2/repository/ -v "${build_dir}":/dist \
     ${docker_volume_opt} \
     ${image_name} sleep 1h)
 
-echo "Running container ${container_name} ${container_id}"
+echo "Running container ${container_name} ${container_id} using image 
${image_name}"
 
 docker exec --user root ${container_name} bash -c "\${CASSANDRA_DIR}/.build/docker/_create_user.sh build $(id -u) $(id -g)"
 docker exec --user build ${container_name} bash -c "${docker_command}"
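
Both new knobs are plain env variables, as is the DEBUG trace flag; a sketch with a hypothetical cache path:

    # trace the wrapper with bash -x and reuse a shared maven repository
    DEBUG=true m2_dir=/mnt/cache/m2-repository .build/docker/build-jars.sh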
diff --git a/.build/check-code.sh b/.build/docker/build-jars.sh
similarity index 62%
copy from .build/check-code.sh
copy to .build/docker/build-jars.sh
index d3baec45d9..b8039cb0a0 100755
--- a/.build/check-code.sh
+++ b/.build/docker/build-jars.sh
@@ -1,4 +1,4 @@
-#!/bin/sh -e
+#!/bin/bash
 # Licensed to the Apache Software Foundation (ASF) under one
 # or more contributor license agreements.  See the NOTICE file
 # distributed with this work for additional information
@@ -15,14 +15,8 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-# variables, with defaults
-[ "x${CASSANDRA_DIR}" != "x" ] || { CASSANDRA_DIR="$(dirname "$0")/.."; }
-
-# pre-conditions
-command -v ant >/dev/null 2>&1 || { echo >&2 "ant needs to be installed"; exit 1; }
-[ -d "${CASSANDRA_DIR}" ] || { echo >&2 "Directory ${CASSANDRA_DIR} must exist"; exit 1; }
-[ -f "${CASSANDRA_DIR}/build.xml" ] || { echo >&2 "${CASSANDRA_DIR}/build.xml must exist"; exit 1; }
+#
+# Build the jars
 
-# execute
-ant -f "${CASSANDRA_DIR}/build.xml" check dependency-check
+$(dirname "$0")/_docker_run.sh bullseye-build.docker build-jars.sh $1
 exit $?
diff --git a/.build/docker/bullseye-build.docker b/.build/docker/bullseye-build.docker
index 7eb928bf5a..92881aeba1 100644
--- a/.build/docker/bullseye-build.docker
+++ b/.build/docker/bullseye-build.docker
@@ -51,3 +51,6 @@ RUN update-java-alternatives --set java-1.11.0-openjdk-$(dpkg --print-architectu
 
 # python3 is needed for the gen-doc target
 RUN pip install --upgrade pip
+
+# dependencies for .build/ci/ci_parser.py
+RUN pip install beautifulsoup4==4.12.3 jinja2==3.1.3
diff --git a/.build/docker/run-tests.sh b/.build/docker/run-tests.sh
index 0df537978c..17e2682a31 100755
--- a/.build/docker/run-tests.sh
+++ b/.build/docker/run-tests.sh
@@ -15,14 +15,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-#
-#
 # A wrapper script to run-tests.sh (or dtest-python.sh) in docker.
 #  Can split (or grep) the test list into multiple docker runs, collecting results.
-#
-# Each split chunk may be further parallelised over docker containers based on the host's available cpu and memory (and the test type).
-#  Define env variable DISABLE_INNER_SPLITS to disable inner splitting.
-#
+
+[ $DEBUG ] && set -x
 
 # help
 if [ "$#" -lt 1 ] || [ "$#" -gt 3 ] || [ "$1" == "-h" ]; then
@@ -38,7 +34,9 @@ fi
 [ "x${cassandra_dir}" != "x" ] || cassandra_dir="$(readlink -f $(dirname 
"$0")/../..)"
 [ "x${cassandra_dtest_dir}" != "x" ] || 
cassandra_dtest_dir="${cassandra_dir}/../cassandra-dtest"
 [ "x${build_dir}" != "x" ] || build_dir="${cassandra_dir}/build"
+[ "x${m2_dir}" != "x" ] || m2_dir="${HOME}/.m2/repository"
 [ -d "${build_dir}" ] || { mkdir -p "${build_dir}" ; }
+[ -d "${m2_dir}" ] || { mkdir -p "${m2_dir}" ; }
 
 # pre-conditions
 command -v docker >/dev/null 2>&1 || { echo >&2 "docker needs to be 
installed"; exit 1; }
@@ -84,21 +82,30 @@ pushd ${cassandra_dir}/.build >/dev/null
 dockerfile="ubuntu2004_test.docker"
 image_tag="$(md5sum docker/${dockerfile} | cut -d' ' -f1)"
 image_name="apache/cassandra-${dockerfile/.docker/}:${image_tag}"
-docker_mounts="-v ${cassandra_dir}:/home/cassandra/cassandra -v 
"${build_dir}":/home/cassandra/cassandra/build -v 
${HOME}/.m2/repository:/home/cassandra/.m2/repository"
+docker_mounts="-v ${cassandra_dir}:/home/cassandra/cassandra -v 
"${build_dir}":/home/cassandra/cassandra/build -v 
${m2_dir}:/home/cassandra/.m2/repository"
 # HACK hardlinks in overlay are buggy, the following mount prevents hardlinks 
from being used. ref $TMP_DIR in .build/run-tests.sh
 docker_mounts="${docker_mounts} -v 
"${build_dir}/tmp":/home/cassandra/cassandra/build/tmp"
 
 # Look for existing docker image, otherwise build
 if ! ( [[ "$(docker images -q ${image_name} 2>/dev/null)" != "" ]] ) ; then
+  echo "Build image not found locally"
   # try docker login to increase dockerhub rate limits
+  echo "Attempting 'docker login' to increase dockerhub rate limits"
   timeout -k 5 5 docker login >/dev/null 2>/dev/null
+  echo "Pulling build image..."
   if ! ( docker pull -q ${image_name} >/dev/null 2>/dev/null ) ; then
     # Create build images containing the build tool-chain, Java and an Apache 
Cassandra git working directory, with retry
+    echo "Building docker image ${image_name}..."
     until docker build -t ${image_name} -f docker/${dockerfile} .  ; do
-        echo "docker build failed… trying again in 10s… "
-        sleep 10
+      echo "docker build failed… trying again in 10s… "
+      sleep 10
     done
-  fi
+    echo "Docker image ${image_name} has been built"
+  else
+    echo "Successfully pulled build image."
+  fi
+else
+  echo "Found build image locally."
 fi
 
 pushd ${cassandra_dir} >/dev/null
@@ -112,30 +119,41 @@ if [[ ! -z ${JENKINS_URL+x} ]] && [[ ! -z ${NODE_NAME+x} 
]] ; then
 fi
 
 # find host's available cores and mem
-cores=1
-command -v nproc >/dev/null 2>&1 && cores=$(nproc --all)
-mem=1
-# linux
-command -v free >/dev/null 2>&1 && mem=$(free -b | grep Mem: | awk '{print 
$2}')
-# macos
-sysctl -n hw.memsize >/dev/null 2>&1 && mem=$(sysctl -n hw.memsize)
+cores=$(docker run --rm alpine nproc --all) || { echo >&2 "Unable to check 
available CPU cores"; exit 1; }
+
+case $(uname) in
+    "Linux")
+        mem=$(docker run --rm alpine free -b | grep Mem: | awk '{print $2}') 
|| { echo >&2 "Unable to check available memory"; exit 1; }
+        ;;
+    "Darwin")
+        mem=$(sysctl -n hw.memsize) || { echo >&2 "Unable to check available 
memory"; exit 1; }
+        ;;
+    *)
+        echo >&2 "Unsupported operating system, expected Linux or Darwin"
+        exit 1
+esac
 
 # figure out resource limits, scripts, and mounts for the test type
 case ${target} in
+    "build_dtest_jars")
+    ;;
+    "stress-test" | "fqltool-test" )
+        [[ ${mem} -gt $((1 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || { echo >&2 "${target} requires minimum docker memory 1g (per jenkins executor (${jenkins_executors})), found ${mem}"; exit 1; }
+    ;;
     # test-burn doesn't have enough tests in it to split beyond 8, and burn 
and long we want a bit more resources anyway
-    "stress-test" | "fqltool-test" | "microbench" | "test-burn" | "long-test" 
| "cqlsh-test" )
-        [[ ${mem} -gt $((5 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || 
{ echo >&2 "tests require minimum docker memory 6g (per jenkins executor 
(${jenkins_executors})), found ${mem}"; exit 1; }
+    "microbench" | "test-burn" | "long-test" | "cqlsh-test" )
+        [[ ${mem} -gt $((5 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || { echo >&2 "${target} requires minimum docker memory 6g (per jenkins executor (${jenkins_executors})), found ${mem}"; exit 1; }
     ;;
-    "dtest" | "dtest-novnode" | "dtest-offheap" | "dtest-large" | 
"dtest-large-novnode" | "dtest-upgrade" | "dtest-upgrade-novnode"| 
"dtest-upgrade-large" | "dtest-upgrade-novnode-large" )
+    "dtest" | "dtest-novnode" | "dtest-latest" | "dtest-large" | 
"dtest-large-novnode" | "dtest-upgrade" | "dtest-upgrade-novnode"| 
"dtest-upgrade-large" | "dtest-upgrade-novnode-large" )
         [ -f "${cassandra_dtest_dir}/dtest.py" ] || { echo >&2 
"${cassandra_dtest_dir}/dtest.py must exist"; exit 1; }
-        [[ ${mem} -gt $((15 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] 
|| { echo >&2 "dtests require minimum docker memory 16g (per jenkins executor 
(${jenkins_executors})), found ${mem}"; exit 1; }
+        [[ ${mem} -gt $((15 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || { echo >&2 "${target} requires minimum docker memory 16g (per jenkins executor (${jenkins_executors})), found ${mem}"; exit 1; }
         test_script="run-python-dtests.sh"
         docker_mounts="${docker_mounts} -v 
${cassandra_dtest_dir}:/home/cassandra/cassandra-dtest"
         # check that ${cassandra_dtest_dir} is valid
         [ -f "${cassandra_dtest_dir}/dtest.py" ] || { echo >&2 
"${cassandra_dtest_dir}/dtest.py not found. please specify 
'cassandra_dtest_dir' to point to the local cassandra-dtest source"; exit 1; }
     ;;
     "test"| "test-cdc" | "test-compression" | "test-oa" | 
"test-system-keyspace-directory" | "test-latest" | "jvm-dtest" | 
"jvm-dtest-upgrade" | "jvm-dtest-novnode" | "jvm-dtest-upgrade-novnode" | 
"simulator-dtest")
-        [[ ${mem} -gt $((5 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || 
{ echo >&2 "tests require minimum docker memory 6g (per jenkins executor 
(${jenkins_executors})), found ${mem}"; exit 1; }
+        [[ ${mem} -gt $((5 * 1024 * 1024 * 1024 * ${jenkins_executors})) ]] || { echo >&2 "${target} requires minimum docker memory 6g (per jenkins executor (${jenkins_executors})), found ${mem}"; exit 1; }
         max_docker_runs_by_cores=$( echo "sqrt( ${cores} / 
${jenkins_executors} )" | bc )
         max_docker_runs_by_mem=$(( ${mem} / ( 5 * 1024 * 1024 * 1024 * 
${jenkins_executors} ) ))
     ;;
@@ -164,14 +182,15 @@ fi
 docker_flags="${docker_flags} -d --rm"
 
 # make sure build_dir is good
-mkdir -p ${build_dir}/tmp || true
-mkdir -p ${build_dir}/test/logs || true
-mkdir -p ${build_dir}/test/output || true
-chmod -R ag+rwx ${build_dir}
+mkdir -p "${build_dir}/tmp" || true
+mkdir -p "${build_dir}/test/logs" || true
+mkdir -p "${build_dir}/test/output" || true
+mkdir -p "${build_dir}/test/reports" || true
+chmod -R ag+rwx "${build_dir}"
 
 # define testtag.extra so tests can be aggregated together. (jdk is already 
appended in build.xml)
-case ${target} in
-    "cqlsh-test" | "dtest" | "dtest-novnode" | "dtest-offheap" | "dtest-large" 
| "dtest-large-novnode" | "dtest-upgrade" | "dtest-upgrade-large" | 
"dtest-upgrade-novnode" | "dtest-upgrade-novnode-large" )
+case "${target}" in
+    "cqlsh-test" | "dtest" | "dtest-novnode" | "dtest-latest" | "dtest-large" 
| "dtest-large-novnode" | "dtest-upgrade" | "dtest-upgrade-large" | 
"dtest-upgrade-novnode" | "dtest-upgrade-novnode-large" )
         ANT_OPTS="-Dtesttag.extra=_$(arch)_python${python_version/./-}"
     ;;
     "jvm-dtest-novnode" | "jvm-dtest-upgrade-novnode" )
@@ -224,9 +243,11 @@ echo "Running container ${container_name} ${docker_id}"
 docker exec --user root ${container_name} bash -c 
"\${CASSANDRA_DIR}/.build/docker/_create_user.sh cassandra $(id -u) $(id -g)" | 
tee -a ${logfile}
 docker exec --user root ${container_name} update-alternatives --set python 
/usr/bin/python${python_version} | tee -a ${logfile}
 
-# capture logs and pid for container
+# capture logs and status
+set -o pipefail
 docker exec --user cassandra ${container_name} bash -c "${docker_command}" | 
tee -a ${logfile}
 status=$?
+set +o pipefail
 
 if [ "$status" -ne 0 ] ; then
     echo "${docker_id} failed (${status}), debug…"
diff --git a/.build/run-python-dtests.sh b/.build/run-python-dtests.sh
index 360c8b68d2..03080476c2 100755
--- a/.build/run-python-dtests.sh
+++ b/.build/run-python-dtests.sh
@@ -117,8 +117,8 @@ if [ "${DTEST_TARGET}" = "dtest" ]; then
     DTEST_ARGS="--use-vnodes --num-tokens=${NUM_TOKENS} 
--skip-resource-intensive-tests"
 elif [ "${DTEST_TARGET}" = "dtest-novnode" ]; then
     DTEST_ARGS="--skip-resource-intensive-tests --keep-failed-test-dir"
-elif [ "${DTEST_TARGET}" = "dtest-offheap" ]; then
-    DTEST_ARGS="--use-vnodes --num-tokens=${NUM_TOKENS} 
--use-off-heap-memtables --skip-resource-intensive-tests"
+elif [ "${DTEST_TARGET}" = "dtest-latest" ]; then
+    DTEST_ARGS="--use-vnodes --num-tokens=${NUM_TOKENS} 
--configuration-yaml=cassandra_latest.yaml --skip-resource-intensive-tests"
 elif [ "${DTEST_TARGET}" = "dtest-large" ]; then
     DTEST_ARGS="--use-vnodes --num-tokens=${NUM_TOKENS} 
--only-resource-intensive-tests --force-resource-intensive-tests"
 elif [ "${DTEST_TARGET}" = "dtest-large-novnode" ]; then
@@ -145,6 +145,7 @@ if [[ "${DTEST_SPLIT_CHUNK}" =~ ^[0-9]+/[0-9]+$ ]]; then
     ( split --help 2>&1 ) | grep -q "r/K/N" || split_cmd=gsplit
     command -v ${split_cmd} >/dev/null 2>&1 || { echo >&2 "${split_cmd} needs 
to be installed"; exit 1; }
     SPLIT_TESTS=$(${split_cmd} -n r/${DTEST_SPLIT_CHUNK} 
${DIST_DIR}/test_list.txt)
+    SPLIT_STRING="_${DTEST_SPLIT_CHUNK//\//_}"
 elif [[ "x" != "x${DTEST_SPLIT_CHUNK}" ]] ; then
     SPLIT_TESTS=$(grep -e "${DTEST_SPLIT_CHUNK}" ${DIST_DIR}/test_list.txt)
     [[ "x" != "x${SPLIT_TESTS}" ]] || { echo "no tests match regexp 
\"${DTEST_SPLIT_CHUNK}\""; exit 1; }
@@ -152,9 +153,10 @@ else
     SPLIT_TESTS=$(cat ${DIST_DIR}/test_list.txt)
 fi
 
+pytest_results_file="${DIST_DIR}/test/output/nosetests.xml"
+pytest_opts="-vv --log-cli-level=DEBUG --junit-xml=${pytest_results_file} 
--junit-prefix=${DTEST_TARGET} -s"
 
-PYTEST_OPTS="-vv --log-cli-level=DEBUG 
--junit-xml=${DIST_DIR}/test/output/nosetests.xml 
--junit-prefix=${DTEST_TARGET} -s"
-pytest ${PYTEST_OPTS} --cassandra-dir=${CASSANDRA_DIR} --keep-failed-test-dir 
${DTEST_ARGS} ${SPLIT_TESTS} 2>&1 | tee -a ${DIST_DIR}/test_stdout.txt
+pytest ${pytest_opts} --cassandra-dir=${CASSANDRA_DIR} --keep-failed-test-dir 
${DTEST_ARGS} ${SPLIT_TESTS} 2>&1 | tee -a ${DIST_DIR}/test_stdout.txt
 
 # tar up any ccm logs for easy retrieval
 if ls ${TMPDIR}/*/test/*/logs/* &>/dev/null ; then
@@ -164,10 +166,13 @@ fi
 
 # merge all unit xml files into one, and print summary test numbers
 pushd ${CASSANDRA_DIR}/ >/dev/null
-# remove <testsuites> wrapping elements. `ant generate-unified-test-report` 
doesn't like it`
-sed -r "s/<[\/]?testsuites>//g" ${DIST_DIR}/test/output/nosetests.xml > 
${TMPDIR}/nosetests.xml
-cat ${TMPDIR}/nosetests.xml > ${DIST_DIR}/test/output/nosetests.xml
-ant -quiet -silent generate-unified-test-report
+# remove <testsuites> wrapping elements. `ant generate-test-report` doesn't like it, and update the testsuite name
+sed -r "s/<[\/]?testsuites>//g" ${pytest_results_file} > 
${TMPDIR}/nosetests.xml
+cat ${TMPDIR}/nosetests.xml > ${pytest_results_file}
+sed "s/testsuite name=\"Cassandra dtests\"/testsuite 
name=\"${DTEST_TARGET}_jdk${java_version}_python${python_version}_cython${cython}_$(uname
 -m)${SPLIT_STRING}\"/g" ${pytest_results_file} > ${TMPDIR}/nosetests.xml
+cat ${TMPDIR}/nosetests.xml > ${pytest_results_file}
+
+ant -quiet -silent generate-test-report
 popd  >/dev/null
 
 ################################
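A DTEST_SPLIT_CHUNK of the form N/M selects the Nth of M round-robin chunks via
GNU split (gsplit on macOS); any other non-empty value is treated as a grep
pattern over the test list. A worked example of the split mode:

    # print the 2nd of 64 interleaved chunks of the collected dtest list
    split -n r/2/64 ${DIST_DIR}/test_list.txt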
diff --git a/.build/run-tests.sh b/.build/run-tests.sh
index 80c07a470f..21fbf77c9b 100755
--- a/.build/run-tests.sh
+++ b/.build/run-tests.sh
@@ -180,13 +180,14 @@ _main() {
   # ant test setup
   export TMP_DIR="${DIST_DIR}/tmp"
   [ -d ${TMP_DIR} ] || mkdir -p "${TMP_DIR}"
-  export ANT_TEST_OPTS="-Dno-build-test=true -Dtmp.dir=${TMP_DIR}"
+  export ANT_TEST_OPTS="-Dno-build-test=true -Dtmp.dir=${TMP_DIR} 
-Dbuild.test.output.dir=${DIST_DIR}/test/output/${target}"
 
   # fresh virtualenv and test logs results everytime
-  [[ "/" == "${DIST_DIR}" ]] || rm -rf "${DIST_DIR}/test/{html,output,logs}"
+  [[ "/" == "${DIST_DIR}" ]] || rm -rf 
"${DIST_DIR}/test/{html,output,logs,reports}"
 
   # cheap trick to ensure dependency libraries are in place. allows us to 
stash only project specific build artifacts.
-  ant -quiet -silent resolver-dist-lib
+  #  also recreate some of the non-build files we need
+  ant -quiet -silent resolver-dist-lib _createVersionPropFile
 
   case ${target} in
     "stress-test")
@@ -240,6 +241,9 @@ _main() {
       fi
       ant testclasslist -Dtest.classlistprefix=distributed 
-Dtest.timeout=$(_timeout_for "test.distributed.timeout") 
-Dtest.classlistfile=<(echo "${testlist}") ${ANT_TEST_OPTS} || echo "failed 
${target} ${split_chunk}"
       ;;
+    "build_dtest_jars")
+      _build_all_dtest_jars
+      ;;
     "jvm-dtest-upgrade" | "jvm-dtest-upgrade-novnode")
       _build_all_dtest_jars
       [ "jvm-dtest-upgrade-novnode" == "${target}" ] || 
ANT_TEST_OPTS="${ANT_TEST_OPTS} -Dcassandra.dtest.num_tokens=16"
@@ -262,7 +266,7 @@ _main() {
   esac
 
   # merge all unit xml files into one, and print summary test numbers
-  ant -quiet -silent generate-unified-test-report
+  ant -quiet -silent generate-test-report
 
   popd  >/dev/null
 }
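With build.test.output.dir now keyed by target, XML files from different test
types can no longer collide when results are aggregated; an illustrative layout:

    build/test/output/test/TEST-org.apache.cassandra.utils.FBUtilitiesTest.xml
    build/test/output/test-compression/TEST-org.apache.cassandra.utils.FBUtilitiesTest.xml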
diff --git a/.jenkins/Jenkinsfile b/.jenkins/Jenkinsfile
index 4e1425f9ab..1bb2fc0e11 100644
--- a/.jenkins/Jenkinsfile
+++ b/.jenkins/Jenkinsfile
@@ -1,3 +1,4 @@
+#!/usr/bin/env groovy
 // Licensed to the Apache Software Foundation (ASF) under one
 // or more contributor license agreements.  See the NOTICE file
 // distributed with this work for additional information
@@ -11,762 +12,533 @@
 // Unless required by applicable law or agreed to in writing, software
 // distributed under the License is distributed on an "AS IS" BASIS,
 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// Se# Licensed to the Apache Software Foundation (ASF) under onee the License 
for the specific language governing permissions and
+// See the License for the specific language governing permissions and
 // limitations under the License.
 //
 //
-// Jenkins declaration of how to build and test the current codebase.
-//  Jenkins infrastructure related settings should be kept in
-//    
https://github.com/apache/cassandra-builds/blob/trunk/jenkins-dsl/cassandra_job_dsl_seed.groovy
+// Jenkins CI declaration.
+//
+// The declarative pipeline is presented first as a high level view.
+//
+// Build and Test Stages are dynamic; the full possible list is defined by the `tasks()` function.
+// There is a choice of pipeline profiles, each with a set of tasks to run; see `pipelineProfiles()`.
+//
+// All tasks use the dockerised CI-agnostic scripts found under `.build/docker/`.
+// Tasks of `type: test` always run `.build/docker/run-tests.sh`.
+//
+//
+// This Jenkinsfile is expected to work on any Jenkins infrastructure.
+// The controller should have 4 cpu, 12GB ram (and be configured to use 
`-XX:+UseG1GC -Xmx8G`)
+// It requires agents providing five labels, each of which can provide docker and the following capabilities:
+//  - cassandra-amd64-small  : 1 cpu, 1GB ram
+//  - cassandra-small        : 1 cpu, 1GB ram (alias for above but for any 
arch)
+//  - cassandra-amd64-medium : 3 cpu, 5GB ram
+//  - cassandra-medium       : 3 cpu, 5GB ram (alias for above but for any 
arch)
+//  - cassandra-amd64-large  : 7 cpu, 14GB ram
+//
+// When running builds parameterised to other architectures the corresponding 
labels are expected.
+//  For example 'arm64' requires the labels: cassandra-arm64-small, 
cassandra-arm64-medium, cassandra-arm64-large.
+//
+// The built-in node must have the "controller" label. There must be more than two agents for each label.
+//
+// Plugins required are:
+//  git, workflow-job, workflow-cps, junit, workflow-aggregator, ws-cleanup, 
pipeline-build-step, test-stability, copyartifact.
+//
+// Any functionality that depends upon ASF Infra (i.e. the canonical ci-cassandra.a.o)
+//  will be ignored when run on other environments.
+// Note there are also differences when CI is being run pre- or post-commit.
+//
 //
 // Validate/lint this file using the following command
 // `curl -X POST  -F "jenkinsfile=<.jenkins/Jenkinsfile" 
https://ci-cassandra.apache.org/pipeline-model-converter/validate`
+//
+
 
 pipeline {
-  agent { label 'cassandra' }
+  agent { label 'cassandra-small' }
+  parameters {
+    string(name: 'repository', defaultValue: scm.userRemoteConfigs[0].url, 
description: 'Cassandra Repository')
+    string(name: 'branch', defaultValue: env.BRANCH_NAME, description: 
'Branch')
+
+    choice(name: 'profile', choices: pipelineProfiles().keySet() as List, 
description: 'Pick a pipeline profile.')
+    string(name: 'profile_custom_regexp', defaultValue: '', description: 
'Regexp for stages when using custom profile. See `testSteps` in Jenkinsfile 
for list of stages. Example: stress.*|jvm-dtest.*')
+
+    choice(name: 'architecture', choices: archsSupported() + "all", description: 'Pick architecture. ARM64 is disabled by default at the moment.')
+    string(name: 'jdk', defaultValue: "", description: 'Restrict JDK versions. 
(e.g. "11", "17", etc)')
+
+    string(name: 'dtest_repository', defaultValue: 
'https://github.com/apache/cassandra-dtest' ,description: 'Cassandra DTest 
Repository')
+    string(name: 'dtest_branch', defaultValue: 'trunk', description: 'DTest 
Branch')
+  }
   stages {
-    stage('Init') {
+    stage('jar') {
+      // the jar stage executes only the 'jar' build step, via the build(…) 
function
+      // the results of these (per jdk, per arch) are then stashed and used 
for every other build and test step
       steps {
-          cleanWs()
-          script {
-              currentBuild.result='SUCCESS'
-          }
+        script {
+          parallel(getJarTasks())
+        }
       }
     }
-    stage('Build') {
+    stage('Tests') {
+      // the Tests stage executes all other build and test steps.
+      // build steps are sent to the build(…) function, test steps sent to the 
test(…) function
+      // these steps are parameterised and split by the tasks() function
+      when {
+        expression { hasNonJarTasks() }
+      }
       steps {
-       script {
-        def attempt = 1
-        retry(2) {
-          if (attempt > 1) {
-            sleep(60 * attempt)
-          }
-          attempt = attempt + 1
-          build job: "${env.JOB_NAME}-artifacts"
+        script {
+          parallel(tasks()['tests'])
         }
-       }
       }
     }
-    stage('Test') {
-      parallel {
-        stage('stress') {
-          steps {
-            script {
-              def attempt = 1
-              while (attempt <=2) {
-                if (attempt > 1) {
-                  sleep(60 * attempt)
-                }
-                attempt = attempt + 1
-                stress = build job: "${env.JOB_NAME}-stress-test", propagate: 
false
-                if (stress.result != 'FAILURE') break
-              }
-              if (stress.result != 'SUCCESS') unstable('stress test failures')
-              if (stress.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('stress-test', stress.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('fqltool') {
-          steps {
-              script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  fqltool = build job: "${env.JOB_NAME}-fqltool-test", 
propagate: false
-                  if (fqltool.result != 'FAILURE') break
-                }
-                if (fqltool.result != 'SUCCESS') unstable('fqltool test 
failures')
-                if (fqltool.result == 'FAILURE') currentBuild.result='FAILURE'
-              }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('fqltool-test', fqltool.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('units') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  test = build job: "${env.JOB_NAME}-test", propagate: false
-                  if (test.result != 'FAILURE') break
-              }
-              if (test.result != 'SUCCESS') unstable('unit test failures')
-              if (test.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test', test.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('long units') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  long_test = build job: "${env.JOB_NAME}-long-test", 
propagate: false
-                  if (long_test.result != 'FAILURE') break
-              }
-              if (long_test.result != 'SUCCESS') unstable('long unit test 
failures')
-              if (long_test.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('long-test', long_test.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('burn') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  burn = build job: "${env.JOB_NAME}-test-burn", propagate: 
false
-                  if (burn.result != 'FAILURE') break
-              }
-              if (burn.result != 'SUCCESS') unstable('burn test failures')
-              if (burn.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-burn', burn.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('cdc') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  cdc = build job: "${env.JOB_NAME}-test-cdc", propagate: false
-                  if (cdc.result != 'FAILURE') break
-              }
-              if (cdc.result != 'SUCCESS') unstable('cdc failures')
-              if (cdc.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-cdc', cdc.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('compression') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  compression = build job: "${env.JOB_NAME}-test-compression", 
propagate: false
-                  if (compression.result != 'FAILURE') break
-              }
-              if (compression.result != 'SUCCESS') unstable('compression 
failures')
-              if (compression.result == 'FAILURE') 
currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-compression', 
compression.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('oa') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  oa = build job: "${env.JOB_NAME}-test-oa", propagate: false
-                  if (oa.result != 'FAILURE') break
-              }
-              if (oa.result != 'SUCCESS') unstable('oa failures')
-              if (oa.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-oa', oa.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('system-keyspace-directory') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  system_keyspace_directory = build job: 
"${env.JOB_NAME}-test-system-keyspace-directory", propagate: false
-                  if (system_keyspace_directory.result != 'FAILURE') break
-              }
-              if (system_keyspace_directory.result != 'SUCCESS') 
unstable('system-keyspace-directory failures')
-              if (system_keyspace_directory.result == 'FAILURE') 
currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-system-keyspace-directory', 
system_keyspace_directory.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('latest') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  latest = build job: "${env.JOB_NAME}-test-latest", 
propagate: false
-                  if (latest.result != 'FAILURE') break
-              }
-              if (latest.result != 'SUCCESS') unstable('test-latest failures')
-              if (latest.result == 'FAILURE') currentBuild.result='FAILURE'
-            }
-          }
-          post {
-            always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('test-latest', latest.getNumber())
-                    }
-                }
-            }
-          }
-        }
-        stage('cqlsh') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  cqlsh = build job: "${env.JOB_NAME}-cqlsh-tests", propagate: 
false
-                  if (cqlsh.result != 'FAILURE') break
-                }
-                if (cqlsh.result != 'SUCCESS') unstable('cqlsh failures')
-                if (cqlsh.result == 'FAILURE') currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('cqlsh-tests', cqlsh.getNumber())
-                      }
-                  }
-              }
-            }
-        }
-        stage('simulator-dtest') {
-          steps {
-            script {
-                def attempt = 1
-                while (attempt <=2) {
-                  if (attempt > 1) {
-                    sleep(60 * attempt)
-                  }
-                  attempt = attempt + 1
-                  simulator_dtest = build job: 
"${env.JOB_NAME}-simulator-dtest", propagate: false
-                  if (simulator_dtest.result != 'FAILURE') break
-                }
-                if (simulator_dtest.result != 'SUCCESS') 
unstable('simulator-dtest failures')
-                if (simulator_dtest.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('simulator-dtest', 
simulator_dtest.getNumber())
-                      }
-                  }
-              }
-            }
-        }
+    stage('Summary') {
+      // generate the ci_summary.html and results_details.tar.xz artefacts
+      steps {
+        generateTestReports()
       }
     }
-    stage('Distributed Test') {
-        parallel {
-          stage('jvm-dtest') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    jvm_dtest = build job: "${env.JOB_NAME}-jvm-dtest", 
propagate: false
-                    if (jvm_dtest.result != 'FAILURE') break
-                  }
-                  if (jvm_dtest.result != 'SUCCESS') unstable('jvm-dtest 
failures')
-                  if (jvm_dtest.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('jvm-dtest', jvm_dtest.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('jvm-dtest-novnode') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    jvm_dtest_novnode = build job: 
"${env.JOB_NAME}-jvm-dtest-novnode", propagate: false
-                    if (jvm_dtest_novnode.result != 'FAILURE') break
-                  }
-                  if (jvm_dtest_novnode.result != 'SUCCESS') 
unstable('jvm-dtest-novnode failures')
-                  if (jvm_dtest_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('jvm-dtest-novnode', 
jvm_dtest_novnode.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('jvm-dtest-upgrade') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    jvm_dtest_upgrade = build job: 
"${env.JOB_NAME}-jvm-dtest-upgrade", propagate: false
-                    if (jvm_dtest_upgrade.result != 'FAILURE') break
-                }
-                if (jvm_dtest_upgrade.result != 'SUCCESS') 
unstable('jvm-dtest-upgrade failures')
-                if (jvm_dtest_upgrade.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('jvm-dtest-upgrade', 
jvm_dtest_upgrade.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('jvm-dtest-upgrade-novnode') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    jvm_dtest_upgrade_novnode = build job: 
"${env.JOB_NAME}-jvm-dtest-upgrade-novnode", propagate: false
-                    if (jvm_dtest_upgrade_novnode.result != 'FAILURE') break
-                }
-                if (jvm_dtest_upgrade_novnode.result != 'SUCCESS') 
unstable('jvm-dtest-upgrade-novnode failures')
-                if (jvm_dtest_upgrade_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('jvm-dtest-upgrade-novnode', 
jvm_dtest_upgrade_novnode.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('dtest') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest = build job: "${env.JOB_NAME}-dtest", propagate: 
false
-                    if (dtest.result != 'FAILURE') break
-                }
-                if (dtest.result != 'SUCCESS') unstable('dtest failures')
-                if (dtest.result == 'FAILURE') currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('dtest', dtest.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('dtest-large') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_large = build job: "${env.JOB_NAME}-dtest-large", 
propagate: false
-                    if (dtest_large.result != 'FAILURE') break
-                }
-                if (dtest_large.result != 'SUCCESS') unstable('dtest-large 
failures')
-                if (dtest_large.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('dtest-large', dtest_large.getNumber())
-                    }
-                }
-              }
-            }
-          }
-          stage('dtest-novnode') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_novnode = build job: 
"${env.JOB_NAME}-dtest-novnode", propagate: false
-                    if (dtest_novnode.result != 'FAILURE') break
-                }
-                if (dtest_novnode.result != 'SUCCESS') unstable('dtest-novnode 
failures')
-                if (dtest_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('dtest-novnode', 
dtest_novnode.getNumber())
-                    }
-                }
-              }
-            }
-          }
-          stage('dtest-offheap') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_offheap = build job: 
"${env.JOB_NAME}-dtest-offheap", propagate: false
-                    if (dtest_offheap.result != 'FAILURE') break
-                }
-                if (dtest_offheap.result != 'SUCCESS') unstable('dtest-offheap 
failures')
-                if (dtest_offheap.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('dtest-offheap', 
dtest_offheap.getNumber())
-                    }
-                }
-              }
-            }
-          }
-          stage('dtest-large-novnode') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_large_novnode = build job: 
"${env.JOB_NAME}-dtest-large-novnode", propagate: false
-                    if (dtest_large_novnode.result != 'FAILURE') break
-                }
-                if (dtest_large_novnode.result != 'SUCCESS') 
unstable('dtest-large-novnode failures')
-                if (dtest_large_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                warnError('missing test xml files') {
-                    script {
-                        copyTestResults('dtest-large-novnode', 
dtest_large_novnode.getNumber())
-                    }
-                }
-              }
-            }
-          }
-          stage('dtest-upgrade') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_upgrade = build job: 
"${env.JOB_NAME}-dtest-upgrade", propagate: false
-                    if (dtest_upgrade.result != 'FAILURE') break
-                }
-                if (dtest_upgrade.result != 'SUCCESS') unstable('dtest 
failures')
-                if (dtest_upgrade.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('dtest-upgrade', 
dtest_upgrade.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('dtest-upgrade-large') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_upgrade = build job: 
"${env.JOB_NAME}-dtest-upgrade-large", propagate: false
-                    if (dtest_upgrade.result != 'FAILURE') break
-                }
-                if (dtest_upgrade.result != 'SUCCESS') unstable('dtest 
failures')
-                if (dtest_upgrade.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('dtest-upgrade', 
dtest_upgrade.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('dtest-upgrade-novnode') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_upgrade_novnode = build job: 
"${env.JOB_NAME}-dtest-upgrade-novnode", propagate: false
-                    if (dtest_upgrade_novnode.result != 'FAILURE') break
-                }
-                if (dtest_upgrade_novnode.result != 'SUCCESS') 
unstable('dtest-upgrade-novnode failures')
-                if (dtest_upgrade_novnode.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('dtest-upgrade-novnode', 
dtest_upgrade_novnode.getNumber())
-                      }
-                  }
-              }
-            }
-          }
-          stage('dtest-upgrade-novnode-large') {
-            steps {
-              script {
-                  def attempt = 1
-                  while (attempt <=2) {
-                    if (attempt > 1) {
-                      sleep(60 * attempt)
-                    }
-                    attempt = attempt + 1
-                    dtest_upgrade_novnode_large = build job: 
"${env.JOB_NAME}-dtest-upgrade-novnode-large", propagate: false
-                    if (dtest_upgrade_novnode_large.result != 'FAILURE') break
-                }
-                if (dtest_upgrade_novnode_large.result != 'SUCCESS') 
unstable('dtest-upgrade-novnode-large failures')
-                if (dtest_upgrade_novnode_large.result == 'FAILURE') 
currentBuild.result='FAILURE'
-              }
-            }
-            post {
-              always {
-                  warnError('missing test xml files') {
-                      script {
-                          copyTestResults('dtest-upgrade-novnode-large', 
dtest_upgrade_novnode_large.getNumber())
-                      }
-                  }
-              }
-            }
-          }
+  }
+  post {
+    always {
+      sendNotifications()
+    }
+  }
+}
+
+///////////////////////////
+//// scripting support ////
+///////////////////////////
+
+def archsSupported() { return ["amd64", "arm64"] }
+def pythonsSupported() { return ["3.8", "3.11"] }
+def pythonDefault() { return "3.8" }
+
+def pipelineProfiles() {
+  return [
+    'packaging': ['artifacts', 'lint', 'debian', 'redhat'],
+    'skinny': ['lint', 'cqlsh-test', 'test', 'jvm-dtest', 'simulator-dtest', 
'dtest'],
+    'pre-commit': ['artifacts', 'lint', 'debian', 'redhat', 'fqltool-test', 
'cqlsh-test', 'test', 'test-latest', 'stress-test', 'test-burn', 'jvm-dtest', 
'simulator-dtest', 'dtest', 'dtest-latest'],
+    'pre-commit w/ upgrades': ['artifacts', 'lint', 'debian', 'redhat', 
'fqltool-test', 'cqlsh-test', 'test', 'test-latest', 'stress-test', 
'test-burn', 'jvm-dtest', 'jvm-dtest-upgrade', 'simulator-dtest', 'dtest', 
'dtest-novnode', 'dtest-latest', 'dtest-upgrade'],
+    'post-commit': ['artifacts', 'lint', 'debian', 'redhat', 'fqltool-test', 
'cqlsh-test', 'test-cdc', 'test', 'test-latest', 'test-compression', 
'stress-test', 'test-burn', 'long-test', 'test-oa', 
'test-system-keyspace-directory', 'jvm-dtest', 'jvm-dtest-upgrade', 
'simulator-dtest', 'dtest', 'dtest-novnode', 'dtest-latest', 'dtest-large', 
'dtest-large-novnode', 'dtest-upgrade', 'dtest-upgrade-novnode', 
'dtest-upgrade-large', 'dtest-upgrade-novnode-large'],
+    'custom': []
+  ]
+}
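+
+// A sketch of previewing which stages a 'custom' profile regexp would enable,
+// mirroring isStageEnabled() below (Groovy's ==~ is a full match, hence the
+// anchors in this bash equivalent):
+//   regexp='stress.*|jvm-dtest.*'
+//   for s in stress-test test-burn jvm-dtest jvm-dtest-upgrade dtest; do
+//     [[ "$s" =~ ^($regexp)$ ]] && echo "$s enabled"
+//   done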
+
+def tasks() {
+  // Steps config
+  def buildSteps = [
+    'jar': [script: 'build-jars.sh', toCopy: null],
+    'artifacts': [script: 'build-artifacts.sh', toCopy: 
'apache-cassandra-*.tar.gz,apache-cassandra-*.jar,apache-cassandra-*.pom'],
+    'lint': [script: 'check-code.sh', toCopy: null],
+    'debian': [script: 'build-debian.sh', toCopy: 
'cassandra_*,cassandra-tools_*'],
+    'redhat': [script: 'build-redhat.sh rpm', toCopy: '*.rpm'],
+  ]
+  buildSteps.each() {
+    it.value.put('type', 'build')
+    it.value.put('size', 'small')
+    it.value.put('splits', 1)
+  }
+
+  def testSteps = [
+    'cqlsh-test': [splits: 1],
+    'fqltool-test': [splits: 1, size: 'small'],
+    'test-cdc': [splits: 8],
+    'test': [splits: 8],
+    'test-latest': [splits: 8],
+    'test-compression': [splits: 8],
+    'stress-test': [splits: 1, size: 'small'],
+    'test-burn': [splits: 2],
+    'long-test': [splits: 8],
+    'test-oa': [splits: 8],
+    'test-system-keyspace-directory': [splits: 8],
+    'jvm-dtest': [splits: 8],
+    'jvm-dtest-upgrade': [splits: 8],
+    'simulator-dtest': [splits: 1],
+    'dtest': [splits: 64, size: 'large'],
+    'dtest-novnode': [splits: 64, size: 'large'],
+    'dtest-latest': [splits: 64, size: 'large'],
+    'dtest-large': [splits: 8, size: 'large'],
+    'dtest-large-novnode': [splits: 8, size: 'large'],
+    'dtest-upgrade': [splits: 64, size: 'large'],
+    'dtest-upgrade-novnode': [splits: 64, size: 'large'],
+    'dtest-upgrade-large': [splits: 64, size: 'large'],
+    'dtest-upgrade-novnode-large': [splits: 64, size: 'large'],
+  ]
+  testSteps.each() {
+    it.value.put('type', 'test')
+    it.value.put('script', '.build/docker/run-tests.sh')
+    if (!it.value['size']) {
+      it.value.put('size', 'medium')
+    }
+    if (it.key.startsWith('dtest')) {
+      it.value.put('python-dtest', true)
+    }
+  }
+
+  def stepsMap = buildSteps + testSteps
+
+  // define matrix axes
+  def Map matrix_axes = [
+    arch: archsSupported(),
+    jdk: javaVersionsSupported(),
+    python: pythonsSupported(),
+    cython: ['yes', 'no'],
+    step: stepsMap.keySet(),
+    split: (1..testSteps.values().splits.max()).toList()
+  ]
+
+  def javaVersionDefault = javaVersionDefault()
+
+  def List _axes = getMatrixAxes(matrix_axes).findAll { axis ->
+    (isArchEnabled(axis['arch'])) && // skip disabled archs
+    (isJdkEnabled(axis['jdk'])) && // skip disabled jdks
+    (isStageEnabled(axis['step'])) && // skip disabled steps
+    !(axis['python'] != pythonDefault() && 'cqlsh-test' != axis['step']) && // 
Use only python 3.8 for all tests but cqlsh-test
+    !(axis['cython'] != 'no' && 'cqlsh-test' != axis['step']) && // cython 
only for cqlsh-test, disable for others
+    !(axis['jdk'] != javaVersionDefault && ('cqlsh-test' == axis['step'] || 
'simulator-dtest' == axis['step'] || axis['step'].contains('dtest-upgrade'))) 
&& // run cqlsh-test, simulator-dtest, *dtest-upgrade only with jdk11
+    // Disable splits for all but proper stages
+    !(axis['split'] > 1 && !stepsMap.findAll { entry -> entry.value.splits >= 
axis['split'] }.keySet().contains(axis['step'])) &&
+    // run only the build types on non-amd64
+    !(axis['arch'] != 'amd64' && stepsMap.findAll { entry -> 'build' == 
entry.value.type }.keySet().contains(axis['step']))
+  }
+
+  def Map tasks = [
+    jars: [failFast: true],
+    tests: [failFast: true]
+  ]
+
+  for (def axis in _axes) {
+    def cell = axis
+    def name = getStepName(cell, stepsMap[cell.step])
+    tasks[cell.step == "jar" ? "jars" : "tests"][name] = { ->
+      "${stepsMap[cell.step].type}"(stepsMap[cell.step], cell)
+    }
+  }
+
+  return tasks
+}
+
+@NonCPS
+def List getMatrixAxes(Map matrix_axes) {
+  List axes = []
+  matrix_axes.each { axis, values ->
+    List axisList = []
+    values.each { value ->
+      axisList << [(axis): value]
+    }
+    axes << axisList
+  }
+  axes.combinations()*.sum()
+}
+
+def getStepName(cell, command) {
+  arch = "amd64" == cell.arch ? "" : " ${cell.arch}"
+  python = "cqlsh-test" != cell.step ? "" : " python${cell.python}"
+  cython = "no" == cell.cython ? "" : " cython"
+  split = command.splits > 1 ? " ${cell.split}/${command.splits}" : ""
+  return "${cell.step}${arch} jdk${cell.jdk}${python}${cython}${split}"
+}
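+
+// Illustrative cell names produced: "test jdk11 3/8",
+// "cqlsh-test jdk11 python3.11 cython", "dtest-large arm64 jdk17 5/8".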
+
+def getJarTasks() {
+    Map jars = tasks()['jars']
+    assertJarTasks(jars)
+    return jars
+}
+
+def assertJarTasks(jars) {
+  if (jars.size() < 2) {
+    error("Nothing to build. Check parameters: jdk ${params.jdk} 
(${javaVersionsSupported()}), arch ${params.architecture} 
(${archsSupported()})")
+  }
+}
+
+def hasNonJarTasks() {
+  return tasks()['tests'].size() > 1
+}
+
+/**
+ * Return the default JDK defined by build.xml
+ **/
+def javaVersionDefault() {
+  sh (returnStdout: true, script: 'grep \'property\\s*name=\"java.default\"\' 
build.xml | sed -ne \'s/.*value=\"\\([^\"]*\\)\".*/\\1/p\'').trim()
+}
+
+/**
+ * Return the supported JDKs defined by build.xml
+ **/
+def javaVersionsSupported() {
+  sh (returnStdout: true, script: 'grep 
\'property\\s*name=\"java.supported\"\' build.xml | sed -ne 
\'s/.*value=\"\\([^\"]*\\)\".*/\\1/p\'').trim().split(',')
+}
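+
+// Equivalent to running, against this branch's build.xml:
+//   grep 'property\s*name="java.supported"' build.xml | sed -ne 's/.*value="\([^"]*\)".*/\1/p'
+// which prints e.g. "11,17" (whatever build.xml declares).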
+
+/**
+ * Is this a post-commit build (or a pre-commit build)
+ **/
+def isPostCommit() {
+  // any build of a branch found on github.com/apache/cassandra is considered 
a post-commit (post-merge) CI run
+  return params.repository && params.repository.contains("apache/cassandra") // params don't exist on the first build
+}
+
+/**
+ * Are we running on ci-cassandra.apache.org ?
+ **/
+def isCanonical() {
+  return "${JENKINS_URL}".contains("ci-cassandra.apache.org")
+}
+
+def isStageEnabled(stage) {
+  return "jar" == stage || pipelineProfiles()[params.profile].contains(stage) 
|| ("custom" == params.profile && stage ==~ params.profile_custom_regexp)
+}
+
+def isArchEnabled(arch) {
+  return params.architecture == arch || "all" == params.architecture
+}
+
+def isJdkEnabled(jdk) {
+  return !params.jdk?.trim() || params.jdk.trim() == jdk
+}
+
+/**
+ * Renders build script into pipeline steps
+ **/
+def build(command, cell) {
+  def build_script = ".build/docker/${command.script}"
+  def maxAttempts = 2
+  def attempt = 0
+  def nodeExclusion = ""
+  retry(maxAttempts) {
+    attempt++
+    node(getNodeLabel(command, cell) + nodeExclusion) {
+      nodeExclusion = "&&!${NODE_NAME}"
+      withEnv(cell.collect { k, v -> "${k}=${v}" }) {
+        
ws("workspace/${JOB_NAME}/${BUILD_NUMBER}/${cell.step}/${cell.arch}/jdk-${cell.jdk}")
 {
+          cleanAgent(cell.step)
+          cleanWs()
+          fetchSource(cell.step, cell.arch, cell.jdk)
+          sh """
+              test -f .jenkins/Jenkinsfile || { echo "Invalid git 
fork/branch"; exit 1; }
+              grep -q "Jenkins CI declaration" .jenkins/Jenkinsfile || { echo 
"Only Cassandra 5.0+ supported"; exit 1; }
+              """
+          def cell_suffix = "_jdk${cell.jdk}_${cell.arch}"
+          def logfile = 
"stage-logs/${JOB_NAME}_${BUILD_NUMBER}_${cell.step}${cell_suffix}_attempt${attempt}.log"
+          def script_vars = "#!/bin/bash \n set -o pipefail ; " // pipe to tee 
needs pipefail
+          script_vars = "${script_vars} m2_dir=\'${WORKSPACE}/build/m2\'"
+          status = sh label: "RUNNING ${cell.step}...", script: 
"${script_vars} ${build_script} ${cell.jdk} 2>&1 | tee build/${logfile}", 
returnStatus: true
+          dir("build") {
+            sh "xz -f *${logfile}"
+            archiveArtifacts artifacts: "${logfile}.xz", fingerprint: true
+            copyToNightlies("${logfile}.xz", 
"${cell.step}/jdk${cell.jdk}/${cell.arch}/")
+          }
+          if (0 != status) { error("Stage ${cell.step}${cell_suffix} failed 
with exit status ${status}") }
+          if ("jar" == cell.step) { // TODO only stash the project built 
files. all dependency libraries are restored from the local maven repo using 
`ant resolver-dist-lib`
+            stash name: "${cell.arch}_${cell.jdk}", useDefaultExcludes: false 
//, includes: '**/*.jar' //, includes: 
"*.jar,classes/**,test/classes/**,tools/**"
+          }
+          dir("build") {
+            copyToNightlies("${command.toCopy}", 
"${cell.step}/jdk${cell.jdk}/${cell.arch}/")
+          }
+          cleanAgent(cell.step)
         }
+      }
     }
-    stage('Summary') {
-      steps {
-          sh "rm -fR cassandra-builds"
-          sh "git clone --depth 1 --single-branch 
https://gitbox.apache.org/repos/asf/cassandra-builds.git";
-          sh "./cassandra-builds/build-scripts/cassandra-test-report.sh"
-          junit testResults: 
'**/build/test/**/TEST*.xml,**/cqlshlib.xml,**/nosetests.xml', 
testDataPublishers: [[$class: 'StabilityTestDataPublisher']]
-
-          // the following should fail on any installation other than 
ci-cassandra.apache.org
-          //  TODO: keep jenkins infrastructure related settings in 
`cassandra_job_dsl_seed.groovy`
-          warnError('cannot send notifications') {
-              script {
-                changes = formatChanges(currentBuild.changeSets)
-                echo "changes: ${changes}"
-              }
-              slackSend channel: '#cassandra-builds', message: ":apache: 
<${env.BUILD_URL}|${currentBuild.fullDisplayName}> completed: 
${currentBuild.result}. 
<https://github.com/apache/cassandra/commit/${env.GIT_COMMIT}|${env.GIT_COMMIT}>\n${changes}"
-              emailext to: '[email protected]', subject: "Build 
complete: ${currentBuild.fullDisplayName} [${currentBuild.result}] 
${env.GIT_COMMIT}", presendScript: 
'${FILE,path="cassandra-builds/jenkins-dsl/cassandra_email_presend.groovy"}', 
body: '''
--------------------------------------------------------------------------------
-Build ${ENV,var="JOB_NAME"} #${BUILD_NUMBER} ${BUILD_STATUS}
-URL: ${BUILD_URL}
--------------------------------------------------------------------------------
-Changes:
-${CHANGES}
--------------------------------------------------------------------------------
-Failed Tests:
-${FAILED_TESTS,maxTests=500,showMessage=false,showStack=false}
--------------------------------------------------------------------------------
-For complete test report and logs see 
https://nightlies.apache.org/cassandra/${JOB_NAME}/${BUILD_NUMBER}/
-'''
-          }
-          sh "echo \"summary) cassandra-builds: `git -C cassandra-builds log 
-1 --pretty=format:'%H %an %ad %s'`\" > builds.head"
-          sh "./cassandra-builds/jenkins-dsl/print-shas.sh"
-          sh "xz TESTS-TestSuites.xml"
-          sh "wget --retry-connrefused --waitretry=1 
\"\${BUILD_URL}/timestamps/?time=HH:mm:ss&timeZone=UTC&appendLog\" -qO - > 
console.log || echo wget failed"
-          sh "xz console.log"
-          sh "echo \"For test report and logs see 
https://nightlies.apache.org/cassandra/${JOB_NAME}/${BUILD_NUMBER}/\"";
+  }
+}
+
+def test(command, cell) {
+  def splits = command.splits ? command.splits : 1
+  def maxAttempts = 2
+  def attempt = 0
+  def nodeExclusion = ""
+  retry(maxAttempts) {
+    attempt++
+    node(getNodeLabel(command, cell) + nodeExclusion) {
+      nodeExclusion = "&&!${NODE_NAME}"
+      withEnv(cell.collect { k, v -> "${k}=${v}" }) {
+        
ws("workspace/${JOB_NAME}/${BUILD_NUMBER}/${cell.step}/${cell.arch}/jdk-${cell.jdk}/python-${cell.python}")
 {
+          cleanAgent(cell.step)
+          cleanWs()
+          fetchSource(cell.step, cell.arch, cell.jdk)
+          def cell_suffix = "_jdk${cell.jdk}_python_${cell.python}_${cell.cython}_${cell.arch}_${cell.split}_${splits}"
+          def logfile = "stage-logs/${JOB_NAME}_${BUILD_NUMBER}_${cell.step}${cell_suffix}_attempt${attempt}.log"
+          // pipe to tee needs pipefail
+          def script_vars = "#!/bin/bash \n set -o pipefail ; "
+          script_vars = "${script_vars} python_version=\'${cell.python}\'"
+          script_vars = "${script_vars} m2_dir=\'${WORKSPACE}/build/m2\'"
+          if ("cqlsh-test" == cell.step) {
+            script_vars = "${script_vars} cython=\'${cell.cython}\'"
+          }
+          script_vars = fetchDTestsSource(command, script_vars)
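+          // jvm-dtest-upgrade cells need the dtest jars: built once per arch/jdk, then stashed and reused (see buildJVMDTestJars)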
+          buildJVMDTestJars(cell, script_vars, logfile)
+          status = sh label: "RUNNING TESTS ${cell.step}...", script: "${script_vars} .build/docker/run-tests.sh ${cell.step} '${cell.split}/${splits}' ${cell.jdk} 2>&1 | tee -a build/${logfile}", returnStatus: true
+          dir("build") {
+            sh "xz -f ${logfile}"
+            archiveArtifacts artifacts: "${logfile}.xz", fingerprint: true
+            copyToNightlies("${logfile}.xz", 
"${cell.step}/${cell.arch}/jdk${cell.jdk}/python${cell.python}/cython_${cell.cython}/"
 + "split_${cell.split}_${splits}".replace("/", "_"))
+          }
+          if (0 != status) { error("Stage ${cell.step}${cell_suffix} failed with exit status ${status}") }
+          dir("build") {
+            sh """
+                mkdir -p test/output/${cell.step}
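+                # nest results under jdk_<jdk>/<arch> and suffix the cqlshlib/nosetests files, keeping file paths and suite names unique across cells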
+                find test/output -type f -name 'TEST*.xml' -execdir mkdir -p jdk_${cell.jdk}/${cell.arch} ';' -execdir mv {} jdk_${cell.jdk}/${cell.arch}/{} ';'
+                find test/output -name cqlshlib.xml -execdir mv cqlshlib.xml ${cell.step}/cqlshlib${cell_suffix}.xml ';'
+                find test/output -name nosetests.xml -execdir mv nosetests.xml ${cell.step}/nosetests${cell_suffix}.xml ';'
+              """
+            junit testResults: "test/**/TEST-*.xml,test/**/cqlshlib*.xml,test/**/nosetests*.xml", testDataPublishers: [[$class: 'StabilityTestDataPublisher']]
+            sh "find test/output -type f -name *.xml -exec sh -c 'xz -f {} &' 
';' ; wait "
+            archiveArtifacts artifacts: "test/logs/**,test/**/TEST-*.xml.xz,test/**/cqlshlib*.xml.xz,test/**/nosetests*.xml.xz", fingerprint: true
+            copyToNightlies("test/logs/**", 
"${cell.step}/${cell.arch}/jdk${cell.jdk}/python${cell.python}/cython_${cell.cython}/"
 + "split_${cell.split}_${splits}".replace("/", "_"))
+          }
+          cleanAgent(cell.step)
+        }
       }
-      post {
-          always {
-              sshPublisher(publishers: [sshPublisherDesc(configName: 'Nightlies', transfers: [sshTransfer(remoteDirectory: 'cassandra/${JOB_NAME}/${BUILD_NUMBER}/', sourceFiles: 'console.log.xz,TESTS-TestSuites.xml.xz')])])
-          }
+    }
+  }
+}
+
+def fetchSource(stage, arch, jdk) {
+    if ("jar" == stage) {
+      checkout changelog: false, scm: scmGit(branches: [[name: params.branch]], extensions: [cloneOption(depth: 1, noTags: true, reference: '', shallow: true)], userRemoteConfigs: [[url: params.repository]])
+      sh "mkdir -p build/stage-logs"
+    } else {
+      unstash name: "${arch}_${jdk}"
+    }
+}
+
+def fetchDTestsSource(command, script_vars) {
+  if (command.containsKey('python-dtest')) {
+    checkout changelog: false, poll: false, scm: scmGit(branches: [[name: params.dtest_branch]], extensions: [cloneOption(depth: 1, noTags: true, reference: '', shallow: true), [$class: 'RelativeTargetDirectory', relativeTargetDir: "${WORKSPACE}/build/cassandra-dtest"]], userRemoteConfigs: [[url: params.dtest_repository]])
+    sh "test -f build/cassandra-dtest/requirements.txt || { echo 'Invalid cassandra-dtest fork/branch'; exit 1; }"
+    return "${script_vars} cassandra_dtest_dir='${WORKSPACE}/build/cassandra-dtest'"
+  }
+  return script_vars
+}
+
+def buildJVMDTestJars(cell, script_vars, logfile) {
+  if (cell.step.startsWith("jvm-dtest-upgrade")) {
+    try {
+      unstash name: "jvm_dtests_${cell.arch}_${cell.jdk}"
+    } catch (error) {
+      sh label: "RUNNING build_dtest_jars...", script: "${script_vars} 
.build/docker/run-tests.sh build_dtest_jars ${cell.jdk} 2>&1 | tee 
build/${logfile}"
+      stash name: "jvm_dtests_${cell.arch}_${cell.jdk}", includes: 
'**/dtest*.jar'
+    }
+  }
+}
+
+def getNodeLabel(command, cell) {
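+  // labels take the form "cassandra-<arch>-<size>"; the size (e.g. small, medium) comes from the test command definition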
+  echo "using node label: cassandra-${cell.arch}-${command.size}"
+  return "cassandra-${cell.arch}-${command.size}"
+}
+
+def copyToNightlies(sourceFiles, remoteDirectory='') {
+  if (isCanonical() && sourceFiles?.trim()) {
+    def remotePath = remoteDirectory.startsWith("cassandra/") ? "${remoteDirectory}" : "cassandra/${JOB_NAME}/${BUILD_NUMBER}/${remoteDirectory}"
+    def attempt = 0
+    retry(9) {
+      attempt++
+      if (attempt > 1) { sleep(60 * attempt) }
+      sshPublisher(
+      continueOnError: true, failOnError: false,
+      publishers: [
+        sshPublisherDesc(
+        configName: "Nightlies",
+        transfers: [ sshTransfer( sourceFiles: sourceFiles, remoteDirectory: remotePath) ]
+        )
+      ])
+    }
+    echo "archived to https://nightlies.apache.org/${remotePath}";
+  }
+}
+
+def cleanAgent(job_name) {
+  sh "hostname"
+  if (isCanonical()) {
+    def maxJobHours = 12
+    echo "Cleaning project, and pruning docker for '${job_name}' on 
${NODE_NAME}…" ;
+    sh """
+        git clean -qxdff -e build/test/jmh-result.json || true;
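+        # if builds appear active on this agent, prune only docker state older than ${maxJobHours}h; otherwise prune everything, volumes included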
+        if pgrep -xa docker || pgrep -af "build/docker" || pgrep -af "cassandra-builds/build-scripts" ; then docker system prune --all --force --filter "until=${maxJobHours}h" || true ; else docker system prune --force --volumes || true ; fi;
+      """
+  }
+}
+
+/////////////////////////////////////////
+////// scripting support for summary ////
+/////////////////////////////////////////
+
+def generateTestReports() {
+  // built-in on ci-cassandra will be much faster (local transfer for copyArtifacts and archiveArtifacts)
+  def nodeName = isCanonical() ? "built-in" : "cassandra-medium"
+  node(nodeName) {
+    cleanWs()
+    checkout changelog: false, scm: scmGit(branches: [[name: params.branch]], extensions: [cloneOption(depth: 1, noTags: true, reference: '', shallow: true)], userRemoteConfigs: [[url: params.repository]])
+    copyArtifacts filter: 'test/**/TEST-*.xml.xz,test/**/cqlshlib*.xml.xz,test/**/nosetests*.xml.xz', fingerprintArtifacts: true, projectName: env.JOB_NAME, selector: specific(env.BUILD_NUMBER), target: "build/", optional: true
+    if (fileExists('build/test/output')) {
+      // merge splits for each target's test report, other axes are kept separate
+      //   TODO parallelised for loop
+      //   TODO results_details.tar.xz needs to include all logs for failed tests
+      sh """
+          find build/test/output -name '*.xml.xz' -exec sh -c 'xz -f --decompress {} &' ';' ; wait
+
+          for target in \$(ls build/test/output/) ; do
+            if test -d build/test/output/\${target} ; then
+              mkdir -p build/test/reports/\${target}
+              echo "Report for \${target} (\$(find 
build/test/output/\${target} -name '*.xml' | wc -l) test files)"
+              
CASSANDRA_DOCKER_ANT_OPTS="-Dbuild.test.output.dir=build/test/output/\${target} 
-Dbuild.test.report.dir=build/test/reports/\${target}"
+              export CASSANDRA_DOCKER_ANT_OPTS
+              .build/docker/_docker_run.sh bullseye-build.docker 
ci/generate-test-report.sh
+            fi
+          done
+
+          .build/docker/_docker_run.sh bullseye-build.docker ci/generate-ci-summary.sh || echo "failed generate-ci-summary.sh"
+
+          tar -cf build/results_details.tar -C build/test/ reports && xz -9f build/results_details.tar
+          """
+
+      dir('build/') {
+        archiveArtifacts artifacts: "ci_summary.html,results_details.tar.xz", fingerprint: true
+        copyToNightlies('results_details.tar.xz')
       }
     }
   }
 }
 
-def copyTestResults(target, build_number) {
-    step([$class: 'CopyArtifact',
-            projectName: "${env.JOB_NAME}-${target}",
-            optional: true,
-            fingerprintArtifacts: true,
-            selector: specific("${build_number}"),
-            target: target]);
+def sendNotifications() {
+  if (isPostCommit() && isCanonical()) {
+    // the following is expected only to work on ci-cassandra.apache.org
+    try {
+      script {
+        changes = formatChangeLogChanges(currentBuild.changeSets)
+        echo "changes: ${changes}"
+      }
+      slackSend channel: '#cassandra-builds', message: ":apache: <${BUILD_URL}|${currentBuild.fullDisplayName}> completed: ${currentBuild.result}. <https://github.com/apache/cassandra/commit/${GIT_COMMIT}|${GIT_COMMIT}>\n${changes}"
+      emailext to: '[email protected]', subject: "Build complete: ${currentBuild.fullDisplayName} [${currentBuild.result}] ${GIT_COMMIT}", presendScript: 'msg.removeHeader("In-Reply-To"); msg.removeHeader("References")', body: emailContent()
+    } catch (Exception ex) {
+      echo 'failed to send notifications ' + ex.toString()
+    }
+  }
 }
 
-def formatChanges(changeLogSets) {
-    def result = ''
-    for (int i = 0; i < changeLogSets.size(); i++) {
-        def entries = changeLogSets[i].items
-        for (int j = 0; j < entries.length; j++) {
-            def entry = entries[j]
-            result = result + "${entry.commitId} by ${entry.author} on ${new Date(entry.timestamp)}: ${entry.msg}\n"
-        }
+def formatChangeLogChanges(changeLogSets) {
+  def result = ''
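+  // produces one line per commit: "<commitId> by <author> on <date>: <message>"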
+  for (int i = 0; i < changeLogSets.size(); i++) {
+    def entries = changeLogSets[i].items
+    for (int j = 0; j < entries.length; j++) {
+      def entry = entries[j]
+      result = result + "${entry.commitId} by ${entry.author} on ${new Date(entry.timestamp)}: ${entry.msg}\n"
     }
-    return result
+  }
+  return result
+}
+
+def emailContent() {
+  return '''
+  --------------------------------------------------------------------------------
+  Build ${ENV,var="JOB_NAME"} #${BUILD_NUMBER} ${BUILD_STATUS}
+  URL: ${BUILD_URL}
+  --------------------------------------------------------------------------------
+  Changes:
+  ${CHANGES}
+  --------------------------------------------------------------------------------
+  Failed Tests:
+  ${FAILED_TESTS,maxTests=500,showMessage=false,showStack=false}
+  --------------------------------------------------------------------------------
+  For complete test report and logs see https://nightlies.apache.org/cassandra/${JOB_NAME}/${BUILD_NUMBER}/
+  '''
 }
diff --git a/CHANGES.txt b/CHANGES.txt
index 6d6b3ea50c..5cd68dd179 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 5.0-beta2
+ * Fix FBUtilities' parsing of gcp cos_containerd kernel versions (CASSANDRA-18594)
  * Clean up KeyRangeIterator classes (CASSANDRA-19428)
  * Warn clients about possible consistency violations for filtering queries against multiple mutable columns (CASSANDRA-19489)
  * Align buffer with commitlog segment size (CASSANDRA-19471)
diff --git a/build.xml b/build.xml
index 308e41a600..adba60d9a7 100644
--- a/build.xml
+++ b/build.xml
@@ -59,6 +59,8 @@
     <property name="build.dir" value="${basedir}/build"/>
     <property name="build.dir.lib" value="${build.dir}/lib"/>
     <property name="build.test.dir" value="${build.dir}/test"/>
+    <property name="build.test.output.dir" value="${build.test.dir}/output"/>
+    <property name="build.test.report.dir" value="${build.test.dir}/reports" />
     <property name="build.classes" value="${build.dir}/classes"/>
     <property name="build.classes.main" value="${build.classes}/main" />
     <property name="javadoc.dir" value="${build.dir}/javadoc"/>
@@ -603,8 +605,8 @@
     -->
     <target name="stress-test-some" depends="maybe-build-test" 
description="Runs stress tests">
         <testmacro inputdir="${stress.test.src}" timeout="${test.timeout}">
-          <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.dir}/output/" 
outfile="TEST-${test.name}-${test.methods}"/>
-          <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+          <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.output.dir}/" 
outfile="TEST-${test.name}-${test.methods}"/>
+          <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
         </testmacro>
     </target>
 
@@ -1142,8 +1144,8 @@
         </classpath>
       </taskdef>
       <mkdir dir="${build.test.dir}/cassandra"/>
-      <mkdir dir="${build.test.dir}/output"/>
-      <mkdir dir="${build.test.dir}/output/@{testtag}"/>
+      <mkdir dir="${build.test.output.dir}"/>
+      <mkdir dir="${build.test.output.dir}/@{testtag}"/>
       <mkdir dir="${tmp.dir}"/>
       <junit-timeout fork="on" forkmode="@{forkmode}" 
failureproperty="testfailed" maxmemory="1024m" timeout="@{timeout}" 
showoutput="@{showoutput}">
         <formatter 
classname="org.apache.cassandra.CassandraXMLJUnitResultFormatter" 
extension=".xml" usefile="true"/>
@@ -1206,7 +1208,7 @@
               <exclude name="**/ant-*.jar"/>
           </fileset>
         </classpath>
-        <batchtest todir="${build.test.dir}/output/@{testtag}">
+        <batchtest todir="${build.test.output.dir}/@{testtag}">
             <fileset dir="@{inputdir}" includes="@{filter}" 
excludes="@{exclude}"/>
             <filelist dir="@{inputdir}" files="@{filelist}"/>
         </batchtest>
@@ -1375,8 +1377,8 @@
   -->
   <target name="testsome" depends="maybe-build-test" description="Execute 
specific unit tests" >
     <testmacro inputdir="${test.unit.src}" timeout="${test.timeout}">
-      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}-${test.methods}"/>
-      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}-${test.methods}"/>
+      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
       <jvmarg value="-Dlegacy-sstable-root=${test.data}/legacy-sstables"/>
       <jvmarg 
value="-Dinvalid-legacy-sstable-root=${test.data}/invalid-legacy-sstables"/>
       <jvmarg value="-Dcassandra.ring_delay_ms=1000"/>
@@ -1391,8 +1393,8 @@
   -->
   <target name="long-testsome" depends="maybe-build-test" description="Execute 
specific long unit tests" >
     <testmacro inputdir="${test.long.src}" timeout="${test.long.timeout}">
-      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}-${test.methods}"/>
-      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}-${test.methods}"/>
+      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
       <jvmarg value="-Dcassandra.ring_delay_ms=1000"/>
       <jvmarg value="-Dcassandra.tolerate_sstable_size=true"/>
     </testmacro>
@@ -1404,8 +1406,8 @@
   -->
   <target name="burn-testsome" depends="maybe-build-test" description="Execute 
specific burn unit tests" >
     <testmacro inputdir="${test.burn.src}" timeout="${test.burn.timeout}">
-      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}-${test.methods}"/>
-      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+      <test if="withMethods" name="${test.name}" methods="${test.methods}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}-${test.methods}"/>
+      <test if="withoutMethods" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
       <jvmarg 
value="-Dlogback.configurationFile=test/conf/logback-burntest.xml"/>
     </testmacro>
   </target>
@@ -1502,7 +1504,7 @@
     <sequential>
       <echo message="running CQL tests"/>
       <mkdir dir="${build.test.dir}/cassandra"/>
-      <mkdir dir="${build.test.dir}/output"/>
+      <mkdir dir="${build.test.output.dir}"/>
       <junit fork="on" forkmode="once" failureproperty="testfailed" 
maxmemory="1024m" timeout="${test.timeout}">
         <formatter type="brief" usefile="false"/>
         <jvmarg value="-Dstorage-config=${test.conf}"/>
@@ -1521,7 +1523,7 @@
             <include name="**/*.jar" />
           </fileset>
         </classpath>
-        <batchtest todir="${build.test.dir}/output">
+        <batchtest todir="${build.test.output.dir}">
             <fileset dir="${test.unit.src}" includes="**/cql3/*Test.java">
                 <contains text="CQLTester" casesensitive="yes"/>
             </fileset>
@@ -1548,7 +1550,7 @@
     <sequential>
       <echo message="running ${test.methods} tests from ${test.name}"/>
       <mkdir dir="${build.test.dir}/cassandra"/>
-      <mkdir dir="${build.test.dir}/output"/>
+      <mkdir dir="${build.test.output.dir}"/>
       <junit fork="on" forkmode="once" failureproperty="testfailed" 
maxmemory="1024m" timeout="${test.timeout}">
         <formatter type="brief" usefile="false"/>
         <jvmarg value="-Dstorage-config=${test.conf}"/>
@@ -1567,8 +1569,8 @@
             <include name="**/*.jar" />
           </fileset>
         </classpath>
-        <test unless:blank="${test.methods}" 
name="org.apache.cassandra.cql3.${test.name}" methods="${test.methods}" 
todir="${build.test.dir}/output"/>
-        <test if:blank="${test.methods}" 
name="org.apache.cassandra.cql3.${test.name}" todir="${build.test.dir}/output"/>
+        <test unless:blank="${test.methods}" 
name="org.apache.cassandra.cql3.${test.name}" methods="${test.methods}" 
todir="${build.test.output.dir}"/>
+        <test if:blank="${test.methods}" 
name="org.apache.cassandra.cql3.${test.name}" todir="${build.test.output.dir}"/>
       </junit>
     </sequential>
   </target>
@@ -1664,15 +1666,6 @@
     <testhelper testdelegate="testlist"/>
   </target>
 
-  <target name="generate-test-report" description="Generates JUnit's HTML 
report from results already in build/output">
-      <junitreport todir="${build.test.dir}">
-        <fileset dir="${build.test.dir}/output">
-          <include name="**/TEST-*.xml"/>
-        </fileset>
-        <report format="frames" todir="${build.test.dir}/junitreport"/>
-      </junitreport>
-  </target>
-
   <!-- run a list of tests as provided in -Dtest.classlistfile (or default of 
'testnames.txt')
   The class list file should be one test class per line, with the path 
starting after test/unit
   e.g. org/apache/cassandra/hints/HintMessageTest.java -->
@@ -1825,8 +1818,8 @@
     -->
   <target name="test-jvm-dtest-some" depends="maybe-build-test" 
description="Execute some in-jvm dtests">
     <testmacro inputdir="${test.distributed.src}" 
timeout="${test.distributed.timeout}" forkmode="once" showoutput="true">
-      <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.dir}/output/" 
outfile="TEST-${test.name}-${test.methods}"/>
-      <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+      <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.output.dir}/" 
outfile="TEST-${test.name}-${test.methods}"/>
+      <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
       <jvmarg value="-Dlogback.configurationFile=test/conf/logback-dtest.xml"/>
       <jvmarg value="-Dcassandra.ring_delay_ms=10000"/>
       <jvmarg value="-Dcassandra.tolerate_sstable_size=true"/>
@@ -1837,8 +1830,8 @@
 
   <target name="test-jvm-dtest-latest-some" depends="maybe-build-test" 
description="Execute some in-jvm dtests with latest configuration">
     <testmacro inputdir="${test.distributed.src}" 
timeout="${test.distributed.timeout}" forkmode="once" showoutput="true">
-      <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.dir}/output/" 
outfile="TEST-${test.name}-${test.methods}"/>
-      <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.dir}/output/" outfile="TEST-${test.name}"/>
+      <test unless:blank="${test.methods}" name="${test.name}" 
methods="${test.methods}" todir="${build.test.output.dir}/" 
outfile="TEST-${test.name}-${test.methods}"/>
+      <test if:blank="${test.methods}" name="${test.name}" 
todir="${build.test.output.dir}/" outfile="TEST-${test.name}"/>
       <jvmarg value="-Djvm_dtests.latest=true"/>
       <jvmarg value="-Dlogback.configurationFile=test/conf/logback-dtest.xml"/>
       <jvmarg value="-Dcassandra.ring_delay_ms=10000"/>
@@ -1847,20 +1840,20 @@
     </testmacro>
   </target>
 
-
-    <target name="generate-unified-test-report" description="Merge all unit 
xml files into one, generate html pages, and print summary test numbers">
-      <junitreport todir="${build.dir}">
-          <fileset dir="${build.test.dir}/output">
-              <include name="**/TEST*.xml"/>
-              <include name="**/cqlshlib.xml"/>
-              <include name="**/nosetests.xml"/>
+  <target name="generate-test-report" description="Merge all unit xml files 
into one, generate html pages, and print summary test numbers">
+      <echo message="Generating Test Summary for test files found under 
${build.test.output.dir}" />
+      <mkdir dir="${build.test.report.dir}/unitTestReport"/>
+      <junitreport todir="${build.test.report.dir}/unitTestReport">
+          <fileset dir="${build.test.output.dir}">
+              <include name="**/TEST-*.xml"/>
+              <include name="**/cqlshlib*.xml"/>
+              <include name="**/nosetests*.xml"/>
           </fileset>
-          <!-- FIXME this can easily OOM, need a workaround-->
-          <report todir="${build.test.dir}/html" />
+          <report todir="${build.test.report.dir}/unitTestReport/html" 
unless:true="${print-summary.skip}" />
       </junitreport>
       <!-- concat the report through a filter chain to extract what you want 
-->
-      <concat>
-          <fileset file="${build.test.dir}/html/overview-summary.html" />
+      <concat unless:true="${print-summary.skip}">
+          <fileset 
file="${build.test.report.dir}/unitTestReport/html/overview-summary.html" />
           <filterchain>
               <linecontainsregexp>
                   <regexp pattern='title="Display all tests"' />
diff --git a/pylib/cassandra-cqlsh-tests.sh b/pylib/cassandra-cqlsh-tests.sh
index 24d00894d2..c80ba43872 100755
--- a/pylib/cassandra-cqlsh-tests.sh
+++ b/pylib/cassandra-cqlsh-tests.sh
@@ -75,6 +75,7 @@ if [ "$cython" = "yes" ]; then
 else
     TESTSUITE_NAME="${TESTSUITE_NAME}.no_cython"
 fi
+TESTSUITE_NAME="${TESTSUITE_NAME}.$(uname -m)"
 
 ################################
 #
@@ -116,7 +117,7 @@ sed -r "s/<[\/]?testsuites>//g" ${BUILD_DIR}/test/output/cqlshlib.xml > /tmp/cql
 cat /tmp/cqlshlib.xml > ${BUILD_DIR}/test/output/cqlshlib.xml
 
 # don't do inline sed for linux+mac compat
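+# e.g. <testsuite name="pytest" ...> becomes <testsuite name="${TESTSUITE_NAME}" ...>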
-sed "s/testsuite errors=\(\".*\"\) failures=\(\".*\"\) hostname=\(\".*\"\) 
name=\"pytest\"/testsuite errors=\1 failures=\2 hostname=\3 
name=\"${TESTSUITE_NAME}\"/g" ${BUILD_DIR}/test/output/cqlshlib.xml > 
/tmp/cqlshlib.xml
+sed "s/testsuite name=\"pytest\"/testsuite name=\"${TESTSUITE_NAME}\"/g" 
${BUILD_DIR}/test/output/cqlshlib.xml > /tmp/cqlshlib.xml
 cat /tmp/cqlshlib.xml > ${BUILD_DIR}/test/output/cqlshlib.xml
 sed "s/testcase classname=\"cqlshlib./testcase 
classname=\"${TESTSUITE_NAME}./g" ${BUILD_DIR}/test/output/cqlshlib.xml > 
/tmp/cqlshlib.xml
 cat /tmp/cqlshlib.xml > ${BUILD_DIR}/test/output/cqlshlib.xml
diff --git a/src/java/org/apache/cassandra/utils/FBUtilities.java b/src/java/org/apache/cassandra/utils/FBUtilities.java
index 001ba2ceec..eeefab136f 100644
--- a/src/java/org/apache/cassandra/utils/FBUtilities.java
+++ b/src/java/org/apache/cassandra/utils/FBUtilities.java
@@ -66,11 +66,12 @@ import com.google.common.base.Joiner;
 import com.google.common.base.Preconditions;
 import com.google.common.base.Suppliers;
 import com.google.common.collect.ImmutableList;
+import com.vdurmont.semver4j.Semver;
+import com.vdurmont.semver4j.SemverException;
 import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.vdurmont.semver4j.Semver;
 import org.apache.cassandra.audit.IAuditLogger;
 import org.apache.cassandra.auth.AllowAllNetworkAuthorizer;
 import org.apache.cassandra.auth.IAuthenticator;
@@ -1405,15 +1406,21 @@ public class FBUtilities
         if (!isLinux)
             return null;
 
+        String output = null;
         try
         {
-            String output = exec(Map.of(), Duration.ofSeconds(5), 1024, 1024, "uname", "-r");
+            output = exec(Map.of(), Duration.ofSeconds(5), 1024, 1024, "uname", "-r");
 
             if (output.isEmpty())
                 throw new RuntimeException("Error while trying to get kernel 
version, 'uname -r' returned empty output");
 
             return parseKernelVersion(output);
         }
+        catch (SemverException e)
+        {
+            logger.error("SemverException parsing {}", output, e);
+            throw e;
+        }
         catch (IOException | TimeoutException e)
         {
             throw new RuntimeException("Error while trying to get kernel 
version", e);
@@ -1429,6 +1436,7 @@ public class FBUtilities
     static Semver parseKernelVersion(String versionString)
     {
        Preconditions.checkNotNull(versionString, "kernel version cannot be null");
+        // ignore blank lines
         try (Scanner scanner = new Scanner(versionString))
         {
             while (scanner.hasNextLine())
@@ -1436,6 +1444,11 @@ public class FBUtilities
                 String version = scanner.nextLine().trim();
                 if (version.isEmpty())
                     continue;
+
+                if (version.endsWith("+"))
+                    // gcp's cos_containerd has a trailing +
+                    version = StringUtils.chop(version);
+
                 return new Semver(version, Semver.SemverType.LOOSE);
             }
         }
diff --git a/test/unit/org/apache/cassandra/CassandraXMLJUnitResultFormatter.java b/test/unit/org/apache/cassandra/CassandraXMLJUnitResultFormatter.java
index 6f821c3d50..d59be7790c 100644
--- a/test/unit/org/apache/cassandra/CassandraXMLJUnitResultFormatter.java
+++ b/test/unit/org/apache/cassandra/CassandraXMLJUnitResultFormatter.java
@@ -109,6 +109,9 @@ public class CassandraXMLJUnitResultFormatter implements JUnitResultFormatter, X
      */
     private final Hashtable<String, Element> testElements = new 
Hashtable<String, Element>();
 
+    private Element propsElement;
+    private Element systemOutputElement;
+
     /**
      * tests that failed.
      */
@@ -142,12 +145,12 @@ public class CassandraXMLJUnitResultFormatter implements JUnitResultFormatter, X
 
     /** {@inheritDoc}. */
     public void setSystemOutput(final String out) {
-        formatOutput(SYSTEM_OUT, out);
+        systemOutputElement = formatOutput(SYSTEM_OUT, out);
     }
 
     /** {@inheritDoc}. */
     public void setSystemError(final String out) {
-        formatOutput(SYSTEM_ERR, out);
+        rootElement.appendChild(formatOutput(SYSTEM_ERR, out));
     }
 
     /**
@@ -170,8 +173,7 @@ public class CassandraXMLJUnitResultFormatter implements JUnitResultFormatter, X
         rootElement.setAttribute(HOSTNAME, getHostname());
 
         // Output properties
-        final Element propsElement = doc.createElement(PROPERTIES);
-        rootElement.appendChild(propsElement);
+        propsElement = doc.createElement(PROPERTIES);
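+        // not appended to rootElement here; endTestSuite() adds properties (and system-out) only when the suite has failures or errors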
         final Properties props = suite.getProperties();
         if (props != null) {
             final Enumeration e = props.propertyNames();
@@ -212,8 +214,13 @@ public class CassandraXMLJUnitResultFormatter implements JUnitResultFormatter, X
         rootElement.setAttribute(ATTR_FAILURES, "" + suite.failureCount());
         rootElement.setAttribute(ATTR_ERRORS, "" + suite.errorCount());
         rootElement.setAttribute(ATTR_SKIPPED, "" + suite.skipCount());
-        rootElement.setAttribute(
-            ATTR_TIME, "" + (suite.getRunTime() / ONE_SECOND));
+        rootElement.setAttribute(ATTR_TIME, "" + (suite.getRunTime() / ONE_SECOND));
+        if (suite.failureCount() > 0 || suite.errorCount() > 0)
+        {
+            // only include properties and system-out if there's failure/error
+            rootElement.appendChild(propsElement);
+            rootElement.appendChild(systemOutputElement);
+        }
         if (out != null) {
             Writer wri = null;
             try {
@@ -351,10 +358,10 @@ public class CassandraXMLJUnitResultFormatter implements JUnitResultFormatter, X
         nested.appendChild(trace);
     }
 
-    private void formatOutput(final String type, final String output) {
+    private Element formatOutput(final String type, final String output) {
         final Element nested = doc.createElement(type);
-        rootElement.appendChild(nested);
         nested.appendChild(doc.createCDATASection(output));
+        return nested;
     }
 
     public void testIgnored(final Test test) {
diff --git a/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java b/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java
index 7b2bd88afd..fc027954bd 100644
--- a/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java
+++ b/test/unit/org/apache/cassandra/utils/FBUtilitiesTest.java
@@ -38,13 +38,13 @@ import java.util.concurrent.Future;
 import java.util.concurrent.TimeUnit;
 
 import com.google.common.primitives.Ints;
+import com.vdurmont.semver4j.Semver;
 import org.junit.Assert;
 import org.junit.Assume;
 import org.junit.Test;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
-import com.vdurmont.semver4j.Semver;
 import org.apache.cassandra.config.Config;
 import org.apache.cassandra.config.DatabaseDescriptor;
 import org.apache.cassandra.db.marshal.AbstractType;
@@ -386,6 +386,9 @@ public class FBUtilitiesTest
 
        assertThatExceptionOfType(IllegalArgumentException.class).isThrownBy(() -> parseKernelVersion("\n \n"))
                                                                 .withMessageContaining("no version found");
+
+        // gcp's cos_containerd example
+        assertThat(parseKernelVersion("5.15.133+").toString()).isEqualTo("5.15.133");
     }
 
     @Test

