This is an automated email from the ASF dual-hosted git repository.
wusheng pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/skywalking-graalvm-distro.git
The following commit(s) were added to refs/heads/main by this push:
new 5fb77c5 Set version to 0.1.0-SNAPSHOT, add release tooling and benchmark tuning
5fb77c5 is described below
commit 5fb77c5d50eab591695621f5077f4d4dfa8d8f03
Author: Wu Sheng <[email protected]>
AuthorDate: Sat Mar 14 23:53:26 2026 +0800
Set version to 0.1.0-SNAPSHOT, add release tooling and benchmark tuning
- Bump Maven version from 1.0.0-SNAPSHOT to 0.1.0-SNAPSHOT across all modules
- Move release.sh to release/ folder, add pre-release.sh for version bumping
- Reduce benchmark traffic rate from ~20 RPS to ~12 RPS (60% of the previous rate)
- Update changes.md with release tooling, benchmark, and CI/CD sections
---
.gitignore | 5 +
README.md | 47 ++-
benchmark/benchmark.sh | 224 ++++++++++++
benchmark/case.md | 243 +++++++++++++
benchmark/cases/graalvm-resource-usage/run.sh | 401 +++++++++++++++++++++
benchmark/docker-compose-graalvm.yml | 36 ++
benchmark/docker-compose-jvm.yml | 36 ++
benchmark/env | 38 ++
.../istio-cluster_graalvm-banyandb/kind.yaml | 5 +
.../istio-cluster_graalvm-banyandb/setup.sh | 388 ++++++++++++++++++++
.../traffic-gen.yaml | 38 ++
.../istio-cluster_graalvm-banyandb/values.yaml | 6 +
.../istio-cluster_oap-banyandb/kind.yaml | 5 +
.../envs-setup/istio-cluster_oap-banyandb/setup.sh | 389 ++++++++++++++++++++
.../istio-cluster_oap-banyandb/traffic-gen.yaml | 38 ++
.../istio-cluster_oap-banyandb/values.yaml | 6 +
benchmark/run.sh | 138 +++++++
build-tools/build-common/pom.xml | 2 +-
build-tools/config-generator/pom.xml | 2 +-
build-tools/pom.xml | 2 +-
build-tools/precompiler/pom.xml | 2 +-
changes/changes.md | 20 +-
docs/README.md | 23 +-
oap-graalvm-native/pom.xml | 2 +-
oap-graalvm-server/pom.xml | 2 +-
.../agent-analyzer-for-graalvm/pom.xml | 2 +-
.../aws-firehose-receiver-for-graalvm/pom.xml | 2 +-
.../cilium-fetcher-for-graalvm/pom.xml | 2 +-
.../ebpf-receiver-for-graalvm/pom.xml | 2 +-
.../envoy-metrics-receiver-for-graalvm/pom.xml | 2 +-
.../health-checker-for-graalvm/pom.xml | 2 +-
.../library-module-for-graalvm/pom.xml | 2 +-
.../library-util-for-graalvm/pom.xml | 2 +-
.../log-analyzer-for-graalvm/pom.xml | 2 +-
.../meter-analyzer-for-graalvm/pom.xml | 2 +-
.../otel-receiver-for-graalvm/pom.xml | 2 +-
oap-libs-for-graalvm/pom.xml | 2 +-
.../server-core-for-graalvm/pom.xml | 2 +-
.../server-starter-for-graalvm/pom.xml | 2 +-
.../status-query-for-graalvm/pom.xml | 2 +-
pom.xml | 2 +-
release/pre-release.sh | 109 ++++++
release.sh => release/release.sh | 5 +-
43 files changed, 2196 insertions(+), 48 deletions(-)
diff --git a/.gitignore b/.gitignore
index 9d500fa..0e2dffe 100644
--- a/.gitignore
+++ b/.gitignore
@@ -17,3 +17,8 @@ release-package/
# OS
.DS_Store
+
+# Benchmark runtime artifacts
+benchmark/reports/
+benchmark/results/
+benchmark/.istio/
diff --git a/README.md b/README.md
index 7c62161..37ebd63 100644
--- a/README.md
+++ b/README.md
@@ -1,33 +1,46 @@
# SkyWalking GraalVM Distro (Experimental)
<img src="http://skywalking.apache.org/assets/logo.svg" alt="Sky Walking logo"
height="90px" align="right" />
-SkyWalking GraalVM Distro is a re-distribution of the official Apache SkyWalking OAP server, targeting GraalVM native image on JDK 25.
+[Apache SkyWalking](https://skywalking.apache.org/) is an open-source APM and observability platform for
+distributed systems, providing metrics, tracing, logging, and profiling capabilities.
-This distro moves all dynamic code generation (Javassist, classpath scanning) from runtime to build time, producing a ~203MB native binary with full OAP feature set. No upstream source modifications required.
+**SkyWalking GraalVM Distro** is a distribution of the same Apache SkyWalking OAP server, compiled as a
+GraalVM native image on JDK 25. It moves all dynamic code generation (OAL, MAL, LAL, Hierarchy via
+ANTLR4 + Javassist) and classpath scanning from runtime to build time, producing a ~203MB self-contained
+native binary with the full OAP feature set. No upstream source modifications required.
-## Quick Start
+### Key Differences from Upstream
-```bash
-git clone --recurse-submodules https://github.com/apache/skywalking-graalvm-distro.git
-cd skywalking-graalvm-distro
-
-# First time: install upstream SkyWalking to Maven cache
-JAVA_HOME=/path/to/graalvm-jdk-25 make init-skywalking
+- **Native binary** instead of JVM — instant startup, ~512MB memory footprint
+- **BanyanDB only** — the sole supported storage backend
+- **Fixed module set** — modules selected at build time, no SPI discovery
+- **Pre-compiled DSL** — all DSL rules compiled at build time
-# Build distro (precompiler + tests + server)
-JAVA_HOME=/path/to/graalvm-jdk-25 make build-distro
+All existing SkyWalking agents, UI, and tooling work unchanged.
-# Build native image
-JAVA_HOME=/path/to/graalvm-jdk-25 make native-image
+### Quick Start
-# Run with Docker Compose (BanyanDB + OAP native)
-docker compose -f docker/docker-compose.yml up
+```bash
+docker run -d \
+ -p 12800:12800 \
+ -p 11800:11800 \
+ -e SW_STORAGE_BANYANDB_TARGETS=<banyandb-host>:17912 \
+ apache/skywalking-graalvm-distro:latest
```
+### Docker Images
+
+| Registry | Image |
+|----------|-------|
+| Docker Hub | `apache/skywalking-graalvm-distro` |
+| GHCR | `ghcr.io/apache/skywalking-graalvm-distro` |
+
+Available for `linux/amd64` and `linux/arm64`. macOS arm64 (Apple Silicon) native binary is available
+on the [GitHub Release](https://github.com/apache/skywalking-graalvm-distro/releases) page.
+
## Documentation
-Full documentation is available in [docs/](docs/) and published on the project website.
+Full documentation is available at [skywalking.apache.org/docs](https://skywalking.apache.org/docs/#ExperimentalGraalVMDistro).
## License
-Apache 2.0
+[Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0)
diff --git a/benchmark/benchmark.sh b/benchmark/benchmark.sh
new file mode 100755
index 0000000..d20f664
--- /dev/null
+++ b/benchmark/benchmark.sh
@@ -0,0 +1,224 @@
+#!/bin/bash
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+ITERATIONS="${1:-5}"
+IDLE_WAIT="${2:-60}"
+
+RESULTS_DIR="$SCRIPT_DIR/results"
+mkdir -p "$RESULTS_DIR"
+
+JVM_COMPOSE="$SCRIPT_DIR/docker-compose-jvm.yml"
+GRAALVM_COMPOSE="$SCRIPT_DIR/docker-compose-graalvm.yml"
+
+cleanup_oap_only() {
+ local compose_file=$1
+ docker compose -f "$compose_file" stop oap 2>/dev/null || true
+ docker compose -f "$compose_file" rm -f oap 2>/dev/null || true
+}
+
+cleanup_all() {
+ local compose_file=$1
+ docker compose -f "$compose_file" down -v 2>/dev/null || true
+}
+
+wait_for_banyandb() {
+ echo " Waiting for BanyanDB to be healthy..."
+ local max_wait=120
+ local elapsed=0
+ until docker exec banyandb sh -c 'nc -nz 127.0.0.1 17912' 2>/dev/null; do
+ sleep 1
+ elapsed=$((elapsed + 1))
+ if [ "$elapsed" -ge "$max_wait" ]; then
+ echo " ERROR: BanyanDB did not become healthy within ${max_wait}s"
+ return 1
+ fi
+ done
+ echo " BanyanDB is ready."
+}
+
+# Wait for OAP to log "listening on 11800" — the real ready signal
+wait_for_oap_ready() {
+ local max_wait=180
+ local elapsed=0
+ echo " Waiting for OAP gRPC server to be ready..."
+ until docker logs oap 2>&1 | grep -q "listening on 11800"; do
+ sleep 1
+ elapsed=$((elapsed + 1))
+ if [ "$elapsed" -ge "$max_wait" ]; then
+ echo " ERROR: OAP did not start within ${max_wait}s"
+ docker logs oap 2>&1 | tail -10
+ return 1
+ fi
+ done
+}
+
+millis_now() {
+ perl -MTime::HiRes=time -e 'printf "%d\n", time()*1000'
+}
+
+# Extract boot time from OAP logs: from the first log timestamp to the "listening on 11800" timestamp.
+# Returns milliseconds.
+extract_boot_time_from_logs() {
+ local logs
+ logs=$(docker logs oap 2>&1)
+
+ # JVM OAP format: "2026-03-14 06:10:28,351 ..."
+ # GraalVM format: "2026-03-14 06:20:17,158 - ..."
+ # Also possible: "2026-03-14T06:10:28.119575555Z ..."
+
+ # Find the first application log timestamp (skip entrypoint script lines)
+ local first_ts
+    first_ts=$(echo "$logs" | grep -oE '^[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2}[,\.][0-9]+' | head -1)
+
+ # Find the "listening on 11800" timestamp
+ local ready_ts
+    ready_ts=$(echo "$logs" | grep "listening on 11800" | grep -oE '[0-9]{4}-[0-9]{2}-[0-9]{2}[T ][0-9]{2}:[0-9]{2}:[0-9]{2}[,\.][0-9]+' | head -1)
+
+ if [ -z "$first_ts" ] || [ -z "$ready_ts" ]; then
+ echo "0"
+ return
+ fi
+
+ # Normalize timestamps: replace T with space, replace comma with dot
+ first_ts=$(echo "$first_ts" | sed 's/T/ /; s/,/./')
+ ready_ts=$(echo "$ready_ts" | sed 's/T/ /; s/,/./')
+
+ # Calculate difference in milliseconds using perl
+ perl -e '
+ use POSIX qw(mktime);
+ sub parse_ts {
+ my $ts = shift;
+ if ($ts =~ /(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2})\.(\d+)/) {
+ my $epoch = mktime($6, $5, $4, $3, $2-1, $1-1900);
+ my $frac = substr($7 . "000", 0, 3); # take first 3 digits as millis
+ return $epoch * 1000 + $frac;
+ }
+ return 0;
+ }
+ my $start = parse_ts($ARGV[0]);
+ my $end = parse_ts($ARGV[1]);
+ print $end - $start . "\n";
+ ' "$first_ts" "$ready_ts"
+}
+
+collect_memory() {
+ local output_file=$1
+ echo " Collecting memory samples (5 samples, 10s apart)..."
+ for s in $(seq 1 5); do
+ docker stats oap --no-stream --format '{{.MemUsage}}' >> "$output_file"
+ sleep 10
+ done
+}
+
+save_oap_logs() {
+ local label=$1
+ local boot_type=$2
+ local iteration=$3
+ local logfile="$RESULTS_DIR/${label}_${boot_type}_logs_${iteration}.txt"
+ docker logs oap > "$logfile" 2>&1 || true
+}
+
+run_benchmark() {
+ local label=$1
+ local compose_file=$2
+ local cold_startup_file="$RESULTS_DIR/${label}_cold_startup.txt"
+ local warm_startup_file="$RESULTS_DIR/${label}_warm_startup.txt"
+ local cold_memory_file="$RESULTS_DIR/${label}_cold_memory.txt"
+ local warm_memory_file="$RESULTS_DIR/${label}_warm_memory.txt"
+
+ > "$cold_startup_file"
+ > "$warm_startup_file"
+ > "$cold_memory_file"
+ > "$warm_memory_file"
+
+ echo "=== Benchmarking: $label ($ITERATIONS iterations) ==="
+
+ for i in $(seq 1 "$ITERATIONS"); do
+ echo ""
+ echo "--- Run $i/$ITERATIONS ---"
+
+        # ---- COLD BOOT: fresh BanyanDB (container destroyed), tables will be created ----
+ echo "[COLD BOOT] Starting fresh (tables will be created)..."
+ cleanup_all "$compose_file"
+ sleep 3
+
+ docker compose -f "$compose_file" up -d banyandb
+ wait_for_banyandb
+
+ docker compose -f "$compose_file" up -d oap
+ wait_for_oap_ready
+
+ ms=$(extract_boot_time_from_logs)
+ echo " Cold startup (from logs): ${ms}ms"
+ echo "$ms" >> "$cold_startup_file"
+ save_oap_logs "$label" "cold" "$i"
+
+ echo " Waiting ${IDLE_WAIT}s for idle state..."
+ sleep "$IDLE_WAIT"
+ collect_memory "$cold_memory_file"
+
+ # ---- WARM BOOT: stop OAP only, BanyanDB keeps data, restart OAP ----
+ echo "[WARM BOOT] Restarting OAP only (tables already exist)..."
+ cleanup_oap_only "$compose_file"
+ sleep 3
+
+ docker compose -f "$compose_file" up -d oap
+ wait_for_oap_ready
+
+ ms=$(extract_boot_time_from_logs)
+ echo " Warm startup (from logs): ${ms}ms"
+ echo "$ms" >> "$warm_startup_file"
+ save_oap_logs "$label" "warm" "$i"
+
+ echo " Waiting ${IDLE_WAIT}s for idle state..."
+ sleep "$IDLE_WAIT"
+ collect_memory "$warm_memory_file"
+
+ cleanup_all "$compose_file"
+ sleep 3
+ done
+
+ echo ""
+ echo "=== $label complete ==="
+ echo ""
+}
+
+print_summary() {
+ local label=$1
+
+ for boot_type in cold warm; do
+ local startup_file="$RESULTS_DIR/${label}_${boot_type}_startup.txt"
+ local memory_file="$RESULTS_DIR/${label}_${boot_type}_memory.txt"
+
+ echo "--- $label ($boot_type boot) ---"
+ echo "Startup times (ms):"
+ sort -n "$startup_file" | while read -r ms; do echo " $ms"; done
+
+ local median
+    median=$(sort -n "$startup_file" | awk '{a[NR]=$1} END{print a[int((NR+1)/2)]}')
+ echo " Median: ${median}ms"
+
+ echo "Memory samples:"
+ cat "$memory_file" | while read -r line; do echo " $line"; done
+ echo ""
+ done
+}
+
+# Pull images first to exclude pull time from benchmark
+echo "Pulling images..."
+docker compose -f "$JVM_COMPOSE" pull
+docker compose -f "$GRAALVM_COMPOSE" pull
+echo ""
+
+# Run benchmarks
+run_benchmark "jvm" "$JVM_COMPOSE"
+run_benchmark "graalvm" "$GRAALVM_COMPOSE"
+
+# Summary
+echo "========================================="
+echo " BENCHMARK SUMMARY"
+echo "========================================="
+echo ""
+print_summary "jvm"
+print_summary "graalvm"
diff --git a/benchmark/case.md b/benchmark/case.md
new file mode 100644
index 0000000..45e3a76
--- /dev/null
+++ b/benchmark/case.md
@@ -0,0 +1,243 @@
+# SkyWalking GraalVM Distro Benchmark Case
+
+## Objective
+
+Verify the blog post claims on startup time and memory footprint reduction.
+Additionally, compare CPU and memory under sustained traffic load.
+
+## Images
+
+| Component | Image |
+|-----------|-------|
+| BanyanDB | `ghcr.io/apache/skywalking-banyandb:e1ba421bd624727760c7a69c84c6fe55878fb526` |
+| OAP (JVM) | `ghcr.io/apache/skywalking/oap:64a1795d8a582f2216f47bfe572b3ab649733c01-java21` |
+| OAP (GraalVM) | `ghcr.io/apache/skywalking-graalvm-distro:0.1.0-rc1` |
+| UI | `ghcr.io/apache/skywalking/ui:latest` |
+
+## Benchmark Structure
+
+```
+benchmark/
+├── case.md # this file
+├── run.sh # main entry point
+├── env # image repos and versions
+├── envs-setup/
+│ ├── istio-cluster_oap-banyandb/ # JVM OAP (2 replicas)
+│ │ ├── kind.yaml
+│ │ ├── setup.sh
+│ │ ├── traffic-gen.yaml # 12 RPS
+│ │ └── values.yaml
+│ └── istio-cluster_graalvm-banyandb/ # GraalVM OAP (2 replicas)
+│ ├── kind.yaml
+│ ├── setup.sh
+│ ├── traffic-gen.yaml # 12 RPS
+│ └── values.yaml
+├── cases/
+│ └── graalvm-resource-usage/ # CPU & memory comparison
+│ └── run.sh
+├── reports/ # generated at runtime
+├── docker-compose-jvm.yml # simple local boot test
+├── docker-compose-graalvm.yml # simple local boot test
+└── benchmark.sh # simple local boot/memory test
+```
+
+## Environments
+
+Both environments deploy the same stack on a Kind cluster:
+- **Istio** 1.25.2 with ALS enabled
+- **BanyanDB** standalone storage
+- **Bookinfo** sample app as workload
+- **Traffic generator** at ~12 RPS against productpage
+- **OAP** 2 replicas (cluster mode)
+
+| Environment | OAP Image | Replicas |
+|-------------|-----------|----------|
+| `istio-cluster_oap-banyandb` | JVM (`ghcr.io/apache/skywalking/oap:64a1795d...`) | 2 |
+| `istio-cluster_graalvm-banyandb` | GraalVM (`ghcr.io/apache/skywalking-graalvm-distro:0.1.0-rc1`) | 2 |
+
+## Test Cases
+
+### Case 1: Simple Boot Test (Docker Compose)
+
+Local docker compose test — no K8s, no traffic. Measures pure OAP startup and idle memory.
+
+**Cold boot**: Fresh BanyanDB (tables created on first connect).
+**Warm boot**: Restart OAP only (tables already exist).
+
+```bash
+cd benchmark
+./benchmark.sh 5 60 # 5 iterations, 60s idle wait
+```
+
+Uses `docker-compose-jvm.yml` and `docker-compose-graalvm.yml`.
+
+### Case 2: Resource Usage Comparison (Kind + Istio)
+
+Collects CPU (millicores) and memory (MiB) from OAP pods via `kubectl top`
+under sustained 12 RPS traffic. Includes CPM validation.
+
+```bash
+# JVM
+./benchmark/run.sh run istio-cluster_oap-banyandb graalvm-resource-usage
+
+# GraalVM
+./benchmark/run.sh run istio-cluster_graalvm-banyandb graalvm-resource-usage
+```
+
+**Configuration** (via env vars):
+- `SAMPLE_COUNT=30` — number of samples (default)
+- `SAMPLE_INTERVAL=10` — seconds between samples (default)
+- `WARMUP_SECONDS=60` — warmup before sampling (default)
+
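These defaults are read with Bash `${VAR:-default}` expansion, so exported environment variables override them. A minimal sketch of that pattern (the variable names match the case script; the override example below is illustrative):

```shell
# Same default-expansion pattern the case script uses: an exported
# variable wins, otherwise the built-in default applies.
SAMPLE_COUNT="${SAMPLE_COUNT:-30}"
SAMPLE_INTERVAL="${SAMPLE_INTERVAL:-10}"
WARMUP_SECONDS="${WARMUP_SECONDS:-60}"
echo "sampling: ${SAMPLE_COUNT} samples every ${SAMPLE_INTERVAL}s after ${WARMUP_SECONDS}s warmup"
```

For example, `SAMPLE_COUNT=60 SAMPLE_INTERVAL=5 ./benchmark/run.sh run istio-cluster_oap-banyandb graalvm-resource-usage` would sample twice as often over the same wall-clock duration.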
+**Output**: `resource-usage.csv`, `resource-analysis.txt`, `environment.txt`, `metrics-round-*.yaml`, `metrics-final.yaml`
+
+**Service metrics** (collected every 30s via swctl, all discovered services):
+- `service_cpm` — calls per minute
+- `service_resp_time` — response time (ms)
+- `service_sla` — successful rate (basis points, 10000 = 100%)
+- `service_apdex` — application performance index
+- `service_percentile` — response time percentiles
+
+**CPM Validation**: After sampling, checks that the entry service
+(`productpage.default`) CPM is close to RPS × 60 = 720 (±30% tolerance).
+A mismatch may indicate dropped requests or processing delays under load.
+
+## How to Run
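The ±30% window can be sketched with the same awk-based bound arithmetic the case script uses (variable names as in `cases/graalvm-resource-usage/run.sh`):

```shell
# Accepted CPM window: EXPECTED_CPM plus/minus 30%, truncated to integers
# by awk's %d conversion.
EXPECTED_CPM=720
CPM_TOLERANCE=0.3
CPM_LOW=$(awk "BEGIN {printf \"%d\", $EXPECTED_CPM * (1 - $CPM_TOLERANCE)}")
CPM_HIGH=$(awk "BEGIN {printf \"%d\", $EXPECTED_CPM * (1 + $CPM_TOLERANCE)}")
echo "accept CPM in [$CPM_LOW, $CPM_HIGH]"
```

Note that `%d` truncates rather than rounds, so the lower bound can land at 503 instead of 504 due to floating-point representation; for a 30% gate this is immaterial.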
+
+### Prerequisites
+- Docker (4+ GB memory recommended)
+- kind >= 0.30.0
+- kubectl (within ±1 minor of K8s 1.34)
+- Helm >= 3.12.0
+- istioctl 1.25.2 (auto-downloaded if missing)
+- swctl (for metrics collection)
+
+### Full benchmark (recommended)
+
+Run both environments with the resource-usage case:
+
+```bash
+# JVM OAP
+./benchmark/run.sh run istio-cluster_oap-banyandb graalvm-resource-usage
+
+# GraalVM OAP (run after JVM completes — shares Kind cluster name)
+./benchmark/run.sh run istio-cluster_graalvm-banyandb graalvm-resource-usage
+```
+
+Each run creates a Kind cluster, deploys the full stack, runs the benchmark,
+and tears down automatically.
+
+### Quick local boot test
+
+```bash
+cd benchmark
+./benchmark.sh 5 60
+```
+
+No K8s required — just Docker.
+
+## Boot Test Results (Case 1)
+
+Tested on 2026-03-14 with 3 iterations, 30s idle wait.
+
+### Test Environment
+
+| Item | Value |
+|------|-------|
+| Host | macOS 26.3.1, Apple M3 Max, 128 GB RAM, arm64 |
+| Docker | Docker Desktop 28.4.0, 10 CPUs / 62.7 GB allocated |
+| BanyanDB | `ghcr.io/apache/skywalking-banyandb:e1ba421bd624727760c7a69c84c6fe55878fb526` |
+| OAP (JVM) | `ghcr.io/apache/skywalking/oap:64a1795d8a582f2216f47bfe572b3ab649733c01-java21` |
+| OAP (GraalVM) | `ghcr.io/apache/skywalking-graalvm-distro:0.1.0-rc1` |
+
+Boot time is measured from OAP's first application log timestamp to the
+`listening on 11800` log line (gRPC server ready).
+
+### Startup Time (ms)
+
+| Run | JVM Cold | JVM Warm | GraalVM Cold | GraalVM Warm |
+|-----|----------|----------|--------------|--------------|
+| 1 | 635 | 634 | 5 | 6 |
+| 2 | 709 | 630 | 5 | 4 |
+| 3 | 630 | 629 | 5 | 5 |
+| **Median** | **635** | **630** | **5** | **5** |
+
+### Idle Memory (RSS, 5 samples per run at 10s intervals)
+
+| Variant | Cold Boot Range | Warm Boot Range |
+|---------|----------------|-----------------|
+| JVM | 1.06 – 1.35 GiB | 1.22 – 1.52 GiB |
+| GraalVM | 41.0 – 41.6 MiB | 41.0 – 42.0 MiB |
+
+### Summary
+
+| Metric | JVM OAP | GraalVM OAP | Delta |
+|--------|---------|-------------|-------|
+| Cold boot startup (median) | 635 ms | 5 ms | ~127x faster |
+| Warm boot startup (median) | 630 ms | 5 ms | ~126x faster |
+| Idle RSS | ~1.2 GiB | ~41 MiB | ~97% reduction |
+
+## Resource Usage Under Load Results (Case 2)
+
+Tested on 2026-03-14 on Kind + Istio 1.25.2 + Bookinfo at ~12 RPS.
+30 samples at 10s intervals after 60s warmup. 2 OAP replicas (cluster mode).
+
+### Test Environment
+
+| Item | Value |
+|------|-------|
+| Host | macOS 26.3.1, Apple M3 Max, 128 GB RAM, arm64 |
+| Docker | Docker Desktop 28.4.0, 10 CPUs / 62.7 GB allocated |
+| K8s | Kind v0.31.0, Kubernetes v1.34.3 |
+| Istio | 1.25.2 with ALS (k8s-mesh analyzer) |
+| Workload | Bookinfo sample app, ~12 RPS via traffic generator |
+| Storage | BanyanDB standalone |
+| OAP replicas | 2 (cluster mode) |
+
+### Per-Pod Summary
+
+**JVM OAP:**
+
+| Pod | CPU min | CPU max | CPU avg | CPU median | Mem min | Mem max | Mem avg | Mem median |
+|-----|---------|---------|---------|------------|---------|---------|---------|------------|
+| oap-8w4zr | 98m | 175m | 119m | 117m | 2088 MiB | 2118 MiB | 2106 MiB | 2102 MiB |
+| oap-bvbrk | 80m | 108m | 95m | 95m | 2053 MiB | 2068 MiB | 2059 MiB | 2056 MiB |
+
+**GraalVM OAP:**
+
+| Pod | CPU min | CPU max | CPU avg | CPU median | Mem min | Mem max | Mem avg | Mem median |
+|-----|---------|---------|---------|------------|---------|---------|---------|------------|
+| oap-4v2k2 | 59m | 76m | 65m | 66m | 610 MiB | 676 MiB | 644 MiB | 648 MiB |
+| oap-f78sv | 65m | 75m | 70m | 70m | 556 MiB | 643 MiB | 604 MiB | 597 MiB |
+
+### Aggregate Summary
+
+| Metric | JVM OAP | GraalVM OAP | Delta |
+|--------|---------|-------------|-------|
+| CPU median (millicores) | 101 | 68 | **-33%** |
+| CPU avg (millicores) | 107 | 67 | **-37%** |
+| Memory median (MiB) | 2068 | 629 | **-70%** |
+| Memory avg (MiB) | 2082 | 624 | **-70%** |
+
+### CPM Validation
+
+| | Entry Service | CPM | Status |
+|---|---|---|---|
+| JVM | productpage.default | 365 | parity |
+| GraalVM | productpage.default | 362 | parity |
+
+Both variants report nearly identical CPM for the entry service, confirming
+equivalent traffic processing capability. The value (~362) is lower than
+the raw RPS × 60 = 720 because the mesh-layer CPM counts differ from
+HTTP-level request counts.
+
+### Service Metrics Collected
+
+Metrics collected every 30s via swctl for all discovered services:
+- `service_cpm` — calls per minute
+- `service_resp_time` — response time (ms)
+- `service_sla` — successful rate (10000 = 100%)
+- `service_apdex` — application performance index
+- `service_percentile` — response time percentiles
+
+Final per-service snapshots are in `metrics-final.yaml`.
diff --git a/benchmark/cases/graalvm-resource-usage/run.sh
b/benchmark/cases/graalvm-resource-usage/run.sh
new file mode 100755
index 0000000..a850700
--- /dev/null
+++ b/benchmark/cases/graalvm-resource-usage/run.sh
@@ -0,0 +1,401 @@
+#!/usr/bin/env bash
+# Benchmark case: Resource usage comparison (CPU & memory).
+#
+# Collects CPU and memory usage from OAP pods at regular intervals
+# under sustained traffic. Designed to compare JVM vs GraalVM OAP.
+#
+# Usage:
+# ./run.sh <env-context-file>
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+
+if [ $# -lt 1 ] || [ ! -f "$1" ]; then
+ echo "Usage: $0 <env-context-file>"
+ exit 1
+fi
+
+source "$1"
+
+# Configurable via env vars
+SAMPLE_COUNT="${SAMPLE_COUNT:-30}"
+SAMPLE_INTERVAL="${SAMPLE_INTERVAL:-10}"
+WARMUP_SECONDS="${WARMUP_SECONDS:-60}"
+
+log() { echo "[$(date +%H:%M:%S)] $*"; }
+
+cleanup_pids() {
+ for pid in "${BG_PIDS[@]:-}"; do
+ kill "$pid" 2>/dev/null || true
+ done
+}
+trap cleanup_pids EXIT
+BG_PIDS=()
+
+log "=== Resource Usage Benchmark ==="
+log "Environment: $ENV_NAME"
+log "OAP variant: ${OAP_VARIANT:-unknown}"
+log "OAP: ${OAP_HOST}:${OAP_PORT}"
+log "Namespace: $NAMESPACE"
+log "Config: $SAMPLE_COUNT samples, ${SAMPLE_INTERVAL}s apart, ${WARMUP_SECONDS}s warmup"
+log "Report dir: $REPORT_DIR"
+
+#############################################################################
+# Install metrics-server in Kind (for kubectl top)
+#############################################################################
+log "--- Ensuring metrics-server is available ---"
+
+if ! kubectl top pods -n "$NAMESPACE" &>/dev/null; then
+ log "Installing metrics-server..."
+    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml 2>/dev/null || true
+ # Patch for Kind (no TLS verification needed for kubelet)
+ kubectl -n kube-system patch deployment metrics-server \
+ --type=json \
+        -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]' 2>/dev/null || true
+ log "Waiting for metrics-server to be ready..."
+    kubectl -n kube-system wait --for=condition=ready pod -l k8s-app=metrics-server --timeout=120s 2>/dev/null || true
+ # Give metrics-server time to collect initial data
+ log "Waiting 60s for metrics-server to collect data..."
+ sleep 60
+fi
+
+#############################################################################
+# Metrics monitor (background) — queries all services, multiple metrics
+#############################################################################
+log "--- Starting metrics monitor (every 30s, all services) ---"
+
+OAP_BASE_URL="http://${OAP_HOST}:${OAP_PORT}/graphql"
+
+# Metrics to collect per service
+SWCTL_METRICS=(service_cpm service_resp_time service_sla service_apdex service_percentile)
+
+if command -v swctl &>/dev/null; then
+ metrics_monitor() {
+ local round=0
+ while true; do
+ round=$((round + 1))
+ local out="$REPORT_DIR/metrics-round-${round}.yaml"
+ {
+                echo "--- round: $round time: $(date -u +%Y-%m-%dT%H:%M:%SZ) ---"
+ echo ""
+
+ # Discover all services
+ echo "# services"
+ local svc_yaml
+                svc_yaml=$(swctl --display yaml --base-url="$OAP_BASE_URL" service ls 2>/dev/null || echo "ERROR")
+ echo "$svc_yaml"
+ echo ""
+
+ # Extract all service names
+ local svc_names
+                svc_names=$(echo "$svc_yaml" | grep ' name:' | sed 's/.*name: //')
+
+ # Query each metric for each service
+ while IFS= read -r svc; do
+ [ -z "$svc" ] && continue
+ for metric in "${SWCTL_METRICS[@]}"; do
+ echo "# ${metric} (${svc})"
+ swctl --display yaml --base-url="$OAP_BASE_URL" \
+                            metrics exec --expression="$metric" --service-name="$svc" 2>/dev/null || echo "ERROR"
+ echo ""
+ done
+ done <<< "$svc_names"
+ } > "$out" 2>&1
+ sleep 30
+ done
+ }
+ metrics_monitor &
+ BG_PIDS+=($!)
+else
+ log "WARNING: swctl not found, skipping service metrics collection."
+fi
+
+#############################################################################
+# Warmup
+#############################################################################
+log "--- Warming up for ${WARMUP_SECONDS}s ---"
+sleep "$WARMUP_SECONDS"
+
+#############################################################################
+# Resource usage collection
+#############################################################################
+log "--- Collecting $SAMPLE_COUNT resource samples (${SAMPLE_INTERVAL}s apart) ---"
+
+OAP_PODS=($(kubectl -n "$NAMESPACE" get pods -l "$OAP_SELECTOR" -o jsonpath='{.items[*].metadata.name}'))
+log "OAP pods: ${OAP_PODS[*]}"
+
+RESOURCE_FILE="$REPORT_DIR/resource-usage.csv"
+echo "timestamp,pod,cpu_millicores,memory_mib" > "$RESOURCE_FILE"
+
+for i in $(seq 1 "$SAMPLE_COUNT"); do
+ TS=$(date -u +%Y-%m-%dT%H:%M:%SZ)
+
+ # kubectl top pods
+    TOP_OUTPUT=$(kubectl -n "$NAMESPACE" top pods -l "$OAP_SELECTOR" --no-headers 2>/dev/null || true)
+
+ if [ -n "$TOP_OUTPUT" ]; then
+ while IFS= read -r line; do
+ pod=$(echo "$line" | awk '{print $1}')
+ cpu_raw=$(echo "$line" | awk '{print $2}')
+ mem_raw=$(echo "$line" | awk '{print $3}')
+
+ # Parse CPU: "123m" → 123, "1" → 1000
+ if echo "$cpu_raw" | grep -q 'm$'; then
+ cpu_m=$(echo "$cpu_raw" | sed 's/m$//')
+ else
+ cpu_m=$((cpu_raw * 1000))
+ fi
+
+ # Parse memory: "123Mi" → 123, "1Gi" → 1024
+ if echo "$mem_raw" | grep -q 'Gi$'; then
+                mem_mi=$(echo "$mem_raw" | sed 's/Gi$//' | awk '{printf "%d", $1 * 1024}')
+ elif echo "$mem_raw" | grep -q 'Mi$'; then
+ mem_mi=$(echo "$mem_raw" | sed 's/Mi$//')
+ else
+ mem_mi="$mem_raw"
+ fi
+
+ echo "$TS,$pod,$cpu_m,$mem_mi" >> "$RESOURCE_FILE"
+ done <<< "$TOP_OUTPUT"
+ log " sample $i/$SAMPLE_COUNT: collected"
+ else
+        log "  sample $i/$SAMPLE_COUNT: kubectl top failed (metrics may not be ready)"
+ fi
+
+ if [ "$i" -lt "$SAMPLE_COUNT" ]; then
+ sleep "$SAMPLE_INTERVAL"
+ fi
+done
+
+#############################################################################
+# CPM validation — entry service CPM should be close to RPS×60 (=720)
+#############################################################################
+log "--- Validating entry service CPM ---"
+
+EXPECTED_CPM=720
+CPM_TOLERANCE=0.3 # 30% tolerance
+ENTRY_SERVICE="productpage.default"
+
+if command -v swctl &>/dev/null; then
+ # Find the entry service (productpage) — the one receiving external traffic
+ ENTRY_SVC=""
+    ALL_SVCS=$(swctl --display yaml --base-url="$OAP_BASE_URL" service ls 2>/dev/null \
+ | grep ' name:' | sed 's/.*name: //' || echo "")
+ while IFS= read -r svc; do
+ if echo "$svc" | grep -q "^${ENTRY_SERVICE}"; then
+ ENTRY_SVC="$svc"
+ break
+ fi
+ done <<< "$ALL_SVCS"
+
+ if [ -z "$ENTRY_SVC" ]; then
+        log "  Entry service '$ENTRY_SERVICE' not found. Available: $(echo "$ALL_SVCS" | tr '\n' ', ')"
+ ACTUAL_CPM=0
+ CPM_STATUS="SKIPPED"
+ else
+        CPM_OUTPUT=$(swctl --display yaml --base-url="$OAP_BASE_URL" metrics exec \
+            --expression=service_cpm --service-name="$ENTRY_SVC" 2>/dev/null || echo "")
+
+ # Extract all non-null CPM values (minute-level time series).
+ # Skip the last 2 values — the most recent minutes may be
+ # incomplete (metrics not yet flushed/persisted by OAP).
+ # Average the remaining stable values for validation.
+        CPM_VALUES=$(echo "$CPM_OUTPUT" | grep 'value:' | grep -v 'null' | grep -oE '[0-9]+' || true)
+ CPM_COUNT=$(echo "$CPM_VALUES" | grep -c . || echo "0")
+
+ if [ "$CPM_COUNT" -gt 2 ]; then
+ # Drop last 2 (potentially incomplete), average the rest
+ STABLE_VALUES=$(echo "$CPM_VALUES" | head -n $((CPM_COUNT - 2)))
+            ACTUAL_CPM=$(echo "$STABLE_VALUES" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+ CPM_MIN=$(echo "$STABLE_VALUES" | sort -n | head -1)
+ CPM_MAX=$(echo "$STABLE_VALUES" | sort -n | tail -1)
+ STABLE_COUNT=$(echo "$STABLE_VALUES" | wc -l | tr -d ' ')
+
+            CPM_LOW=$(awk "BEGIN {printf \"%d\", $EXPECTED_CPM * (1 - $CPM_TOLERANCE)}")
+            CPM_HIGH=$(awk "BEGIN {printf \"%d\", $EXPECTED_CPM * (1 + $CPM_TOLERANCE)}")
+            if [ "$ACTUAL_CPM" -ge "$CPM_LOW" ] && [ "$ACTUAL_CPM" -le "$CPM_HIGH" ]; then
+                log "  CPM PASSED: $ENTRY_SVC avg=$ACTUAL_CPM min=$CPM_MIN max=$CPM_MAX (${STABLE_COUNT} stable minutes, expected ~$EXPECTED_CPM, range $CPM_LOW-$CPM_HIGH)"
+                CPM_STATUS="PASS"
+            else
+                log "  CPM WARNING: $ENTRY_SVC avg=$ACTUAL_CPM min=$CPM_MIN max=$CPM_MAX (${STABLE_COUNT} stable minutes, expected ~$EXPECTED_CPM, range $CPM_LOW-$CPM_HIGH)"
+ CPM_STATUS="WARNING"
+ fi
+ elif [ "$CPM_COUNT" -gt 0 ]; then
+            ACTUAL_CPM=$(echo "$CPM_VALUES" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+            log "  CPM WARNING: only $CPM_COUNT data points for $ENTRY_SVC, avg=$ACTUAL_CPM (too few for reliable validation)"
+ CPM_STATUS="WARNING"
+ else
+ log " CPM SKIPPED: no CPM data for $ENTRY_SVC"
+ ACTUAL_CPM=0
+ CPM_STATUS="SKIPPED"
+ fi
+ fi
+
+ # Also collect final snapshot of all service metrics for the report
+ log "--- Collecting final metrics snapshot (all services) ---"
+ METRICS_SNAPSHOT="$REPORT_DIR/metrics-final.yaml"
+ {
+ echo "--- Final metrics snapshot: $(date -u +%Y-%m-%dT%H:%M:%SZ) ---"
+ echo ""
+ while IFS= read -r svc; do
+ [ -z "$svc" ] && continue
+ for metric in "${SWCTL_METRICS[@]}"; do
+ echo "# ${metric} (${svc})"
+ swctl --display yaml --base-url="$OAP_BASE_URL" \
+                    metrics exec --expression="$metric" --service-name="$svc" 2>/dev/null || echo "ERROR"
+ echo ""
+ done
+ done <<< "$ALL_SVCS"
+ } > "$METRICS_SNAPSHOT" 2>&1
+else
+ log " CPM validation SKIPPED: swctl not available"
+ ACTUAL_CPM=0
+ CPM_STATUS="SKIPPED"
+fi
+
+#############################################################################
+# Analysis
+#############################################################################
+log "--- Generating resource usage report ---"
+
+ANALYSIS_FILE="$REPORT_DIR/resource-analysis.txt"
+{
+ echo "================================================================"
+ echo " OAP Resource Usage Report"
+ echo " Generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
+ echo " OAP variant: ${OAP_VARIANT:-unknown}"
+ echo " Samples: $SAMPLE_COUNT x ${SAMPLE_INTERVAL}s apart"
+ echo " Warmup: ${WARMUP_SECONDS}s"
+ echo " Traffic rate: ~12 RPS"
+ echo " OAP pods: ${OAP_PODS[*]}"
+ echo "================================================================"
+ echo ""
+
+ DATA_LINES=$(tail -n +2 "$RESOURCE_FILE" | wc -l | tr -d ' ')
+ if [ "$DATA_LINES" -eq 0 ]; then
+ echo "No resource data collected. Ensure metrics-server is running."
+ else
+ echo "--- Per-Pod Summary ---"
+ echo ""
+ for pod in "${OAP_PODS[@]}"; do
+ echo "Pod: $pod"
+ pod_data=$(grep ",$pod," "$RESOURCE_FILE" || true)
+ if [ -z "$pod_data" ]; then
+ echo " No data collected."
+ echo ""
+ continue
+ fi
+
+ cpu_values=$(echo "$pod_data" | awk -F',' '{print $3}')
+ mem_values=$(echo "$pod_data" | awk -F',' '{print $4}')
+
+ cpu_min=$(echo "$cpu_values" | sort -n | head -1)
+ cpu_max=$(echo "$cpu_values" | sort -n | tail -1)
+            cpu_avg=$(echo "$cpu_values" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+            cpu_median=$(echo "$cpu_values" | sort -n | awk '{a[NR]=$1} END{print a[int((NR+1)/2)]}')
+
+ mem_min=$(echo "$mem_values" | sort -n | head -1)
+ mem_max=$(echo "$mem_values" | sort -n | tail -1)
+      mem_avg=$(echo "$mem_values" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+      mem_median=$(echo "$mem_values" | sort -n | awk '{a[NR]=$1} END{print a[int((NR+1)/2)]}')
+
+      printf "  CPU (millicores): min=%-6s max=%-6s avg=%-6s median=%-6s\n" "$cpu_min" "$cpu_max" "$cpu_avg" "$cpu_median"
+      printf "  Memory (MiB):     min=%-6s max=%-6s avg=%-6s median=%-6s\n" "$mem_min" "$mem_max" "$mem_avg" "$mem_median"
+ echo ""
+ done
+
+ echo "--- Aggregate (all OAP pods) ---"
+ echo ""
+ all_cpu=$(tail -n +2 "$RESOURCE_FILE" | awk -F',' '{print $3}')
+ all_mem=$(tail -n +2 "$RESOURCE_FILE" | awk -F',' '{print $4}')
+
+    agg_cpu_avg=$(echo "$all_cpu" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+    agg_cpu_median=$(echo "$all_cpu" | sort -n | awk '{a[NR]=$1} END{print a[int((NR+1)/2)]}')
+    agg_mem_avg=$(echo "$all_mem" | awk '{s+=$1; n++} END {printf "%d", s/n}')
+    agg_mem_median=$(echo "$all_mem" | sort -n | awk '{a[NR]=$1} END{print a[int((NR+1)/2)]}')
+
+    printf "  CPU (millicores): avg=%-6s median=%-6s\n" "$agg_cpu_avg" "$agg_cpu_median"
+    printf "  Memory (MiB):     avg=%-6s median=%-6s\n" "$agg_mem_avg" "$agg_mem_median"
+ echo ""
+ fi
+
+ echo "--- CPM Validation (entry service: ${ENTRY_SERVICE}) ---"
+ echo ""
+ echo " Traffic rate: ~12 RPS"
+ echo " Expected CPM: ~$EXPECTED_CPM (12 × 60)"
+ echo " Entry service: ${ENTRY_SVC:-not found}"
+  echo " Stable minutes: ${STABLE_COUNT:-N/A} (last 2 dropped as potentially incomplete)"
+ echo " CPM avg: ${ACTUAL_CPM:-N/A}"
+ echo " CPM range: ${CPM_MIN:-N/A} – ${CPM_MAX:-N/A}"
+ echo " Status: ${CPM_STATUS:-SKIPPED}"
+ echo ""
+ if [ "${CPM_STATUS:-SKIPPED}" = "WARNING" ]; then
+    echo " NOTE: CPM mismatch may indicate dropped requests or processing delays."
+ fi
+ echo ""
+ echo "--- Metrics Collected ---"
+ echo ""
+ echo " Monitor frequency: every 30s (background)"
+ echo " Metrics per service: ${SWCTL_METRICS[*]}"
+ echo " Final snapshot: metrics-final.yaml"
+ echo ""
+} > "$ANALYSIS_FILE"
+
+#############################################################################
+# Environment summary
+#############################################################################
+ENV_REPORT="$REPORT_DIR/environment.txt"
+{
+ echo "================================================================"
+ echo " Benchmark Report: Resource Usage"
+ echo " Environment: $ENV_NAME"
+ echo " OAP variant: ${OAP_VARIANT:-unknown}"
+ echo " Generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
+ echo "================================================================"
+ echo ""
+ echo "--- Host ---"
+ echo " OS: $(uname -s) $(uname -r)"
+ echo " Arch: $(uname -m)"
+ echo ""
+ echo "--- Docker ---"
+ echo " Server: $DOCKER_SERVER_VERSION"
+ echo " OS: $DOCKER_OS"
+ echo " Driver: $DOCKER_STORAGE_DRIVER"
+ echo " CPUs: $DOCKER_CPUS"
+ echo " Memory: ${DOCKER_MEM_GB} GB"
+ echo ""
+ echo "--- Benchmark Config ---"
+ echo " OAP variant: ${OAP_VARIANT:-unknown}"
+ echo " OAP replicas: ${#OAP_PODS[@]}"
+ echo " Storage: BanyanDB (standalone)"
+ echo " Istio: ${ISTIO_VERSION:-N/A}"
+ echo " ALS analyzer: ${ALS_ANALYZER:-N/A}"
+ echo " Traffic rate: ~12 RPS"
+ echo " Samples: $SAMPLE_COUNT x ${SAMPLE_INTERVAL}s"
+ echo " Warmup: ${WARMUP_SECONDS}s"
+ echo ""
+ echo "--- Pod Status (at completion) ---"
+  kubectl -n "$NAMESPACE" get pods -o wide 2>/dev/null || echo " (could not query)"
+ echo ""
+ echo "--- K8s Node Resources ---"
+ if [ -f "$REPORT_DIR/node-resources.txt" ]; then
+ cat "$REPORT_DIR/node-resources.txt"
+ else
+ echo " (not captured)"
+ fi
+ echo ""
+} > "$ENV_REPORT"
+
+cleanup_pids
+BG_PIDS=()
+
+log "=== Resource usage benchmark complete ==="
+log "Reports in: $REPORT_DIR"
+log " resource-usage.csv - Raw CSV (timestamp, pod, cpu_m, mem_mib)"
+log " resource-analysis.txt - Summary statistics"
+log " environment.txt - Environment details"
+log " metrics-round-*.yaml - Periodic swctl metrics (all services, every 30s)"
+log " metrics-final.yaml - Final snapshot (all services × all metrics)"
+log "Done. Environment is still running."
diff --git a/benchmark/docker-compose-graalvm.yml b/benchmark/docker-compose-graalvm.yml
new file mode 100644
index 0000000..dac7365
--- /dev/null
+++ b/benchmark/docker-compose-graalvm.yml
@@ -0,0 +1,36 @@
+version: '3.8'
+
+services:
+ banyandb:
+    image: ghcr.io/apache/skywalking-banyandb:e1ba421bd624727760c7a69c84c6fe55878fb526
+ container_name: banyandb
+ restart: always
+ ports:
+ - "17912:17912"
+ - "17913:17913"
+    command: standalone --stream-root-path /tmp/stream-data --measure-root-path /tmp/measure-data --measure-metadata-cache-wait-duration 1m --stream-metadata-cache-wait-duration 1m
+ healthcheck:
+ test: ["CMD", "sh", "-c", "nc -nz 127.0.0.1 17912"]
+ interval: 5s
+ timeout: 10s
+ retries: 120
+
+ oap:
+ image: ghcr.io/apache/skywalking-graalvm-distro:0.1.0-rc1
+ container_name: oap
+ depends_on:
+ banyandb:
+ condition: service_healthy
+ restart: always
+ ports:
+ - "11800:11800"
+ - "12800:12800"
+ environment:
+ SW_STORAGE: banyandb
+ SW_STORAGE_BANYANDB_TARGETS: banyandb:17912
+ SW_HEALTH_CHECKER: default
+ healthcheck:
+ test: ["CMD-SHELL", "nc -nz 127.0.0.1 11800 || exit 1"]
+ interval: 5s
+ timeout: 10s
+ retries: 120
diff --git a/benchmark/docker-compose-jvm.yml b/benchmark/docker-compose-jvm.yml
new file mode 100644
index 0000000..4d4e6e4
--- /dev/null
+++ b/benchmark/docker-compose-jvm.yml
@@ -0,0 +1,36 @@
+version: '3.8'
+
+services:
+ banyandb:
+    image: ghcr.io/apache/skywalking-banyandb:e1ba421bd624727760c7a69c84c6fe55878fb526
+ container_name: banyandb
+ restart: always
+ ports:
+ - "17912:17912"
+ - "17913:17913"
+    command: standalone --stream-root-path /tmp/stream-data --measure-root-path /tmp/measure-data --measure-metadata-cache-wait-duration 1m --stream-metadata-cache-wait-duration 1m
+ healthcheck:
+ test: ["CMD", "sh", "-c", "nc -nz 127.0.0.1 17912"]
+ interval: 5s
+ timeout: 10s
+ retries: 120
+
+ oap:
+    image: ghcr.io/apache/skywalking/oap:64a1795d8a582f2216f47bfe572b3ab649733c01-java21
+ container_name: oap
+ depends_on:
+ banyandb:
+ condition: service_healthy
+ restart: always
+ ports:
+ - "11800:11800"
+ - "12800:12800"
+ environment:
+ SW_STORAGE: banyandb
+ SW_STORAGE_BANYANDB_TARGETS: banyandb:17912
+ SW_HEALTH_CHECKER: default
+ healthcheck:
+ test: ["CMD-SHELL", "nc -nz 127.0.0.1 11800 || exit 1"]
+ interval: 5s
+ timeout: 10s
+ retries: 120
diff --git a/benchmark/env b/benchmark/env
new file mode 100644
index 0000000..b9bff60
--- /dev/null
+++ b/benchmark/env
@@ -0,0 +1,38 @@
+# Benchmark environment configuration.
+# All environment setup scripts source this file for image repos and versions.
+
+##############################################################################
+# SkyWalking OAP (JVM)
+##############################################################################
+SW_OAP_IMAGE_REPO="ghcr.io/apache/skywalking/oap"
+SW_OAP_IMAGE_TAG="64a1795d8a582f2216f47bfe572b3ab649733c01-java21"
+
+##############################################################################
+# SkyWalking OAP (GraalVM Native)
+##############################################################################
+SW_GRAALVM_IMAGE_REPO="ghcr.io/apache/skywalking-graalvm-distro"
+SW_GRAALVM_IMAGE_TAG="0.1.0-rc1"
+
+##############################################################################
+# SkyWalking UI
+##############################################################################
+SW_UI_IMAGE_REPO="ghcr.io/apache/skywalking/ui"
+SW_UI_IMAGE_TAG="latest"
+
+##############################################################################
+# BanyanDB
+##############################################################################
+SW_BANYANDB_IMAGE_REPO="ghcr.io/apache/skywalking-banyandb"
+SW_BANYANDB_IMAGE_TAG="e1ba421bd624727760c7a69c84c6fe55878fb526"
+
+##############################################################################
+# SkyWalking Helm Chart
+##############################################################################
+SW_HELM_CHART="oci://ghcr.io/apache/skywalking-helm/skywalking-helm"
+SW_KUBERNETES_COMMIT_SHA="6fe5e6f0d3b7686c6be0457733e825ee68cb9b35"
+
+##############################################################################
+# Istio
+##############################################################################
+ISTIO_VERSION="1.25.2"
+ALS_ANALYZER="k8s-mesh"
diff --git a/benchmark/envs-setup/istio-cluster_graalvm-banyandb/kind.yaml b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/kind.yaml
new file mode 100644
index 0000000..9776927
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/kind.yaml
@@ -0,0 +1,5 @@
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+ - role: control-plane
+    image: kindest/node:v1.34.3@sha256:08497ee19eace7b4b5348db5c6a1591d7752b164530a36f855cb0f2bdcbadd48
diff --git a/benchmark/envs-setup/istio-cluster_graalvm-banyandb/setup.sh b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/setup.sh
new file mode 100755
index 0000000..37caa10
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/setup.sh
@@ -0,0 +1,388 @@
+#!/usr/bin/env bash
+# Environment setup: OAP (GraalVM native) + BanyanDB + Istio ALS on Kind.
+#
+# Same as the JVM variant but uses the GraalVM native image distro.
+# Runs two OAP replicas (cluster mode), backed by standalone BanyanDB.
+#
+# The context file is written to: $REPORT_DIR/env-context.sh
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+
+NAMESPACE="istio-system"
+CLUSTER_NAME="benchmark-cluster"
+
+# Load benchmark environment configuration (image repos, versions)
+BENCHMARKS_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
+source "$BENCHMARKS_DIR/env"
+
+OAP_IMAGE_REPO="$SW_GRAALVM_IMAGE_REPO"
+OAP_IMAGE_TAG="$SW_GRAALVM_IMAGE_TAG"
+
+# Kind / K8s compatibility
+KIND_MIN_VERSION="0.25.0"
+HELM_MIN_VERSION="3.12.0"
+K8S_NODE_MINOR="1.34"
+
+log() { echo "[$(date +%H:%M:%S)] $*"; }
+
+version_gte() {
+ local IFS=.
+ local i a=($1) b=($2)
+ for ((i = 0; i < ${#b[@]}; i++)); do
+ local va=${a[i]:-0}
+ local vb=${b[i]:-0}
+ if ((va > vb)); then return 0; fi
+ if ((va < vb)); then return 1; fi
+ done
+ return 0
+}
+
+min_kind_for_k8s() {
+ case "$1" in
+ 1.28) echo "0.25.0" ;;
+ 1.29) echo "0.25.0" ;;
+ 1.30) echo "0.27.0" ;;
+ 1.31) echo "0.25.0" ;;
+ 1.32) echo "0.26.0" ;;
+ 1.33) echo "0.27.0" ;;
+ 1.34) echo "0.30.0" ;;
+ 1.35) echo "0.31.0" ;;
+ *) echo "unknown" ;;
+ esac
+}
+
+REPORT_DIR="${REPORT_DIR:?ERROR: REPORT_DIR must be set by the caller}"
+mkdir -p "$REPORT_DIR"
+
+#############################################################################
+# Pre-checks
+#############################################################################
+log "=== Pre-checks ==="
+
+for cmd in kind kubectl helm docker; do
+ if ! command -v "$cmd" &>/dev/null; then
+ echo "ERROR: $cmd not found. Please install it first."
+ exit 1
+ fi
+done
+
+KIND_VERSION=$(kind version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
+log "kind version: $KIND_VERSION (minimum: $KIND_MIN_VERSION)"
+if ! version_gte "$KIND_VERSION" "$KIND_MIN_VERSION"; then
+ echo "ERROR: kind >= $KIND_MIN_VERSION is required, found $KIND_VERSION"
+ exit 1
+fi
+
+REQUIRED_KIND_FOR_NODE=$(min_kind_for_k8s "$K8S_NODE_MINOR")
+if [ "$REQUIRED_KIND_FOR_NODE" = "unknown" ]; then
+  echo "WARNING: No known kind compatibility data for K8s $K8S_NODE_MINOR. Proceeding anyway."
+elif ! version_gte "$KIND_VERSION" "$REQUIRED_KIND_FOR_NODE"; then
+  echo "ERROR: K8s $K8S_NODE_MINOR node image requires kind >= $REQUIRED_KIND_FOR_NODE, found $KIND_VERSION"
+ exit 1
+fi
+log "kind $KIND_VERSION is compatible with K8s $K8S_NODE_MINOR node image."
+
+KUBECTL_CLIENT_VERSION=$(kubectl version --client -o json 2>/dev/null \
+  | grep -oE '"gitVersion":\s*"v([0-9]+\.[0-9]+)' | head -1 | grep -oE '[0-9]+\.[0-9]+')
+if [ -n "$KUBECTL_CLIENT_VERSION" ]; then
+ KUBECTL_MINOR=$(echo "$KUBECTL_CLIENT_VERSION" | cut -d. -f2)
+ NODE_MINOR=$(echo "$K8S_NODE_MINOR" | cut -d. -f2)
+ SKEW=$((KUBECTL_MINOR - NODE_MINOR))
+ if [ "$SKEW" -lt 0 ]; then SKEW=$((-SKEW)); fi
+  log "kubectl client: $KUBECTL_CLIENT_VERSION, K8s node: $K8S_NODE_MINOR (skew: $SKEW)"
+ if [ "$SKEW" -gt 1 ]; then
+    echo "ERROR: kubectl version $KUBECTL_CLIENT_VERSION is too far from K8s $K8S_NODE_MINOR (max ±1 minor)."
+ exit 1
+ fi
+else
+  echo "WARNING: Could not determine kubectl client version, skipping skew check."
+fi
+
+HELM_VERSION=$(helm version --short 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
+log "Helm version: $HELM_VERSION (minimum: $HELM_MIN_VERSION)"
+if ! version_gte "$HELM_VERSION" "$HELM_MIN_VERSION"; then
+ echo "ERROR: Helm >= $HELM_MIN_VERSION is required, found $HELM_VERSION"
+ exit 1
+fi
+
+ISTIOCTL_LOCAL_VERSION=$(istioctl version --remote=false 2>/dev/null | head -1 || echo "none")
+if [ "$ISTIOCTL_LOCAL_VERSION" != "$ISTIO_VERSION" ]; then
+  log "istioctl version mismatch: have $ISTIOCTL_LOCAL_VERSION, need $ISTIO_VERSION. Downloading..."
+ ISTIO_DOWNLOAD_DIR="$BENCHMARKS_DIR/.istio"
+ mkdir -p "$ISTIO_DOWNLOAD_DIR"
+ if [ ! -f "$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin/istioctl" ]; then
+    (cd "$ISTIO_DOWNLOAD_DIR" && export ISTIO_VERSION && curl -sL https://istio.io/downloadIstio | sh -)
+ fi
+ if [ ! -f "$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin/istioctl" ]; then
+ echo "ERROR: Failed to download istioctl $ISTIO_VERSION"
+ exit 1
+ fi
+ export PATH="$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin:$PATH"
+  ISTIOCTL_VERSION=$(istioctl version --remote=false 2>/dev/null | head -1 || echo "unknown")
+ log "Using downloaded istioctl: $ISTIOCTL_VERSION"
+else
+ ISTIOCTL_VERSION="$ISTIOCTL_LOCAL_VERSION"
+ log "istioctl version: $ISTIOCTL_VERSION"
+fi
+
+log "All version checks passed."
+
+DOCKER_CPUS=$(docker info --format '{{.NCPU}}' 2>/dev/null || echo "unknown")
+DOCKER_MEM_BYTES=$(docker info --format '{{.MemTotal}}' 2>/dev/null || echo "0")
+if [ "$DOCKER_MEM_BYTES" -gt 0 ] 2>/dev/null; then
+    DOCKER_MEM_GB=$(awk "BEGIN {printf \"%.1f\", $DOCKER_MEM_BYTES / 1073741824}")
+else
+ DOCKER_MEM_GB="unknown"
+fi
+DOCKER_SERVER_VERSION=$(docker info --format '{{.ServerVersion}}' 2>/dev/null || echo "unknown")
+DOCKER_OS=$(docker info --format '{{.OperatingSystem}}' 2>/dev/null || echo "unknown")
+DOCKER_STORAGE_DRIVER=$(docker info --format '{{.Driver}}' 2>/dev/null || echo "unknown")
+log "Docker: ${DOCKER_CPUS} CPUs, ${DOCKER_MEM_GB} GB memory (server: ${DOCKER_SERVER_VERSION}, ${DOCKER_OS})"
+
+if [ "$DOCKER_MEM_BYTES" -gt 0 ] 2>/dev/null; then
+ MIN_MEM_BYTES=$((4 * 1073741824))
+ if [ "$DOCKER_MEM_BYTES" -lt "$MIN_MEM_BYTES" ]; then
+        echo "WARNING: Docker has only ${DOCKER_MEM_GB} GB memory. Recommend >= 4 GB for this benchmark."
+ fi
+fi
+
+#############################################################################
+# Boot Kind cluster
+#############################################################################
+log "=== Booting Kind cluster ==="
+
+if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then
+ log "Kind cluster '$CLUSTER_NAME' already exists, reusing."
+else
+ log "Creating Kind cluster '$CLUSTER_NAME'..."
+ kind create cluster --name "$CLUSTER_NAME" --config "$SCRIPT_DIR/kind.yaml"
+fi
+
+BANYANDB_IMAGE="${SW_BANYANDB_IMAGE_REPO}:${SW_BANYANDB_IMAGE_TAG}"
+OAP_IMAGE="${OAP_IMAGE_REPO}:${OAP_IMAGE_TAG}"
+UI_IMAGE="${SW_UI_IMAGE_REPO}:${SW_UI_IMAGE_TAG}"
+IMAGES=(
+ "$OAP_IMAGE"
+ "$UI_IMAGE"
+ "$BANYANDB_IMAGE"
+ "docker.io/istio/pilot:${ISTIO_VERSION}"
+ "docker.io/istio/proxyv2:${ISTIO_VERSION}"
+ "docker.io/istio/examples-bookinfo-productpage-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-details-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v2:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v3:1.20.2"
+ "docker.io/istio/examples-bookinfo-ratings-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-ratings-v2:1.20.2"
+ "docker.io/istio/examples-bookinfo-mongodb:1.20.2"
+ "curlimages/curl:latest"
+ "registry.k8s.io/metrics-server/metrics-server:v0.8.1"
+)
+log "Pulling images on host (if not cached)..."
+for img in "${IMAGES[@]}"; do
+ pulled=false
+ for attempt in 1 2 3; do
+ if docker pull "$img" -q 2>/dev/null; then
+ pulled=true
+ break
+ fi
+ [ "$attempt" -lt 3 ] && sleep 5
+ done
+ if [ "$pulled" = false ]; then
+ log " WARNING: failed to pull $img after 3 attempts"
+ fi
+done
+
+log "Loading images into Kind..."
+for img in "${IMAGES[@]}"; do
+    kind load docker-image "$img" --name "$CLUSTER_NAME" 2>/dev/null || log " WARNING: failed to load $img"
+done
+
+#############################################################################
+# Install Istio
+#############################################################################
+log "=== Installing Istio $ISTIO_VERSION ==="
+
+istioctl install -y --set profile=demo \
+  --set meshConfig.defaultConfig.envoyAccessLogService.address=skywalking-oap.${NAMESPACE}:11800 \
+ --set meshConfig.enableEnvoyAccessLogService=true
+
+kubectl label namespace default istio-injection=enabled --overwrite
+
+#############################################################################
+# Deploy SkyWalking via Helm (GraalVM: two replicas, BanyanDB standalone)
+#############################################################################
+log "=== Deploying SkyWalking (GraalVM OAP x2 + BanyanDB + UI) via Helm ==="
+
+helm -n "$NAMESPACE" upgrade --install skywalking \
+ "$SW_HELM_CHART" \
+ --version "0.0.0-${SW_KUBERNETES_COMMIT_SHA}" \
+ --set fullnameOverride=skywalking \
+ --set oap.replicas=2 \
+ --set oap.image.repository="$OAP_IMAGE_REPO" \
+ --set oap.image.tag="$OAP_IMAGE_TAG" \
+ --set oap.storageType=banyandb \
+ --set oap.env.SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS="$ALS_ANALYZER" \
+ --set oap.env.SW_ENVOY_METRIC_ALS_TCP_ANALYSIS="$ALS_ANALYZER" \
+ --set oap.env.SW_HEALTH_CHECKER=default \
+ --set oap.env.SW_TELEMETRY=prometheus \
+ --set oap.envoy.als.enabled=true \
+ --set ui.image.repository="$SW_UI_IMAGE_REPO" \
+ --set ui.image.tag="$SW_UI_IMAGE_TAG" \
+ --set elasticsearch.enabled=false \
+ --set banyandb.enabled=true \
+ --set banyandb.image.repository="$SW_BANYANDB_IMAGE_REPO" \
+ --set banyandb.image.tag="$SW_BANYANDB_IMAGE_TAG" \
+ --set banyandb.standalone.enabled=true \
+ --timeout 1200s \
+ -f "$SCRIPT_DIR/values.yaml"
+
+log "Waiting for BanyanDB to be ready..."
+kubectl -n "$NAMESPACE" wait --for=condition=ready pod -l app.kubernetes.io/name=banyandb --timeout=300s
+
+log "Waiting for OAP init job to complete..."
+for i in $(seq 1 60); do
+    if kubectl -n "$NAMESPACE" get jobs -l component=skywalking-job -o jsonpath='{.items[0].status.succeeded}' 2>/dev/null | grep -q '1'; then
+ log "OAP init job succeeded."
+ break
+ fi
+ if [ "$i" -eq 60 ]; then
+ echo "ERROR: OAP init job did not complete within 300s."
+        kubectl -n "$NAMESPACE" get pods -l component=skywalking-job 2>/dev/null
+ exit 1
+ fi
+ sleep 5
+done
+
+log "Waiting for OAP pods to be ready..."
+kubectl -n "$NAMESPACE" wait --for=condition=ready pod -l app=skywalking,component=oap --timeout=300s
+
+#############################################################################
+# Capture K8s node resources
+#############################################################################
+log "Capturing node resource info..."
+kubectl get nodes -o json | awk '
+ BEGIN { print "--- K8s Node Resources ---" }
+ /"capacity":/ { cap=1 } /"allocatable":/ { alloc=1 }
  cap && /"cpu":/ { gsub(/[",]/, ""); printf "  capacity.cpu: %s\n", $2; cap=0 }
  cap && /"memory":/ { gsub(/[",]/, ""); printf "  capacity.memory: %s\n", $2; cap=0 }
  cap && /"ephemeral-storage":/ { gsub(/[",]/, ""); printf "  capacity.storage: %s\n", $2; cap=0 }
  cap && /"pods":/ { gsub(/[",]/, ""); printf "  capacity.pods: %s\n", $2; cap=0 }
  alloc && /"cpu":/ { gsub(/[",]/, ""); printf "  allocatable.cpu: %s\n", $2; alloc=0 }
  alloc && /"memory":/ { gsub(/[",]/, ""); printf "  allocatable.memory: %s\n", $2; alloc=0 }
+' > "$REPORT_DIR/node-resources.txt"
+kubectl describe node | sed -n '/Allocated resources/,/Events/p' \
+ >> "$REPORT_DIR/node-resources.txt" 2>/dev/null || true
+
+#############################################################################
+# Deploy Istio Bookinfo sample app
+#############################################################################
+log "=== Deploying Bookinfo sample app ==="
+
+BOOKINFO_BASE="https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/bookinfo"
+
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/bookinfo-gateway.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo-ratings-v2.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo-db.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/destination-rule-all.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/virtual-service-ratings-db.yaml"
+
+log "Waiting for Bookinfo pods to be ready..."
+kubectl -n default wait --for=condition=ready pod -l app=productpage --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=details --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=ratings,version=v1 --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=reviews,version=v1 --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=reviews,version=v2 --timeout=300s
+
+log "Deploying traffic generator..."
+kubectl apply -f "$SCRIPT_DIR/traffic-gen.yaml"
+kubectl -n default wait --for=condition=ready pod -l app=traffic-gen --timeout=60s
+
+#############################################################################
+# Cluster health check
+#############################################################################
+log "=== Cluster health check (remote_out_count) ==="
+log "Waiting 30s for traffic to flow..."
+sleep 30
+
+OAP_PODS_CHECK=($(kubectl -n "$NAMESPACE" get pods -l app=skywalking,component=oap -o jsonpath='{.items[*].metadata.name}'))
+EXPECTED_NODES=${#OAP_PODS_CHECK[@]}
+CLUSTER_HEALTHY=true
+
+CURL_IMAGE="curlimages/curl:latest"
+for pod in "${OAP_PODS_CHECK[@]}"; do
+ log " Checking $pod..."
+    POD_IP=$(kubectl -n "$NAMESPACE" get pod "$pod" -o jsonpath='{.status.podIP}')
+    METRICS=$(kubectl -n "$NAMESPACE" run "health-check-${pod##*-}" --rm -i --restart=Never \
+        --image="$CURL_IMAGE" -- curl -s "http://${POD_IP}:1234/metrics" 2>/dev/null) || METRICS=""
+ REMOTE_OUT=$(echo "$METRICS" | grep '^remote_out_count{' || true)
+
+ if [ -z "$REMOTE_OUT" ]; then
+ REMOTE_IN=$(echo "$METRICS" | grep '^remote_in_count{' || true)
+ if [ -n "$REMOTE_IN" ]; then
+            log " $pod: no remote_out_count but has remote_in_count (receiver-only node)"
+ else
+ log " WARNING: $pod has no remote_out_count or remote_in_count"
+ CLUSTER_HEALTHY=false
+ fi
+ else
+ DEST_COUNT=$(echo "$REMOTE_OUT" | wc -l | tr -d ' ')
+ log " $pod: $DEST_COUNT dest(s)"
+ fi
+done
+
+if [ "$CLUSTER_HEALTHY" = true ]; then
+ log "Cluster health check passed."
+else
+ log "WARNING: Cluster health check has issues. Proceeding anyway."
+fi
+
+#############################################################################
+# Port-forward OAP for local queries
+#############################################################################
+log "Setting up port-forwards..."
+kubectl -n "$NAMESPACE" port-forward svc/skywalking-oap 12800:12800 &
+SETUP_BG_PIDS=($!)
+kubectl -n "$NAMESPACE" port-forward svc/skywalking-ui 8080:80 &
+SETUP_BG_PIDS+=($!)
+sleep 3
+
+log "Environment is up. OAP at localhost:12800, UI at localhost:8080"
+
+#############################################################################
+# Write context file
+#############################################################################
+CONTEXT_FILE="$REPORT_DIR/env-context.sh"
+cat > "$CONTEXT_FILE" <<EOF
+# Auto-generated by envs-setup/istio-cluster_graalvm-banyandb/setup.sh
+export ENV_NAME="istio-cluster_graalvm-banyandb"
+export OAP_VARIANT="graalvm"
+export NAMESPACE="$NAMESPACE"
+export CLUSTER_NAME="$CLUSTER_NAME"
+export OAP_HOST="localhost"
+export OAP_PORT="12800"
+export OAP_GRPC_PORT="11800"
+export UI_HOST="localhost"
+export UI_PORT="8080"
+export OAP_SELECTOR="app=skywalking,component=oap"
+export REPORT_DIR="$REPORT_DIR"
+export ISTIO_VERSION="$ISTIO_VERSION"
+export ALS_ANALYZER="$ALS_ANALYZER"
+export DOCKER_CPUS="$DOCKER_CPUS"
+export DOCKER_MEM_GB="$DOCKER_MEM_GB"
+export DOCKER_SERVER_VERSION="$DOCKER_SERVER_VERSION"
+export DOCKER_OS="$DOCKER_OS"
+export DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
+export KIND_VERSION="$KIND_VERSION"
+export KUBECTL_CLIENT_VERSION="${KUBECTL_CLIENT_VERSION:-unknown}"
+export HELM_VERSION="$HELM_VERSION"
+export ISTIOCTL_VERSION="$ISTIOCTL_VERSION"
+export K8S_NODE_MINOR="$K8S_NODE_MINOR"
+export SETUP_BG_PIDS="${SETUP_BG_PIDS[*]}"
+EOF
+
+log "Context written to: $CONTEXT_FILE"
+log "Environment setup complete."
diff --git a/benchmark/envs-setup/istio-cluster_graalvm-banyandb/traffic-gen.yaml b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/traffic-gen.yaml
new file mode 100644
index 0000000..2af8dcd
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/traffic-gen.yaml
@@ -0,0 +1,38 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: traffic-gen
+ labels:
+ app: traffic-gen
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: traffic-gen
+ template:
+ metadata:
+ annotations:
+ sidecar.istio.io/inject: "false"
+ labels:
+ app: traffic-gen
+ spec:
+ containers:
+ - name: traffic-gen
+ image: curlimages/curl:latest
+ imagePullPolicy: IfNotPresent
+ command: ["/bin/sh", "-c", "--"]
+ args:
+ - |
+ echo "Waiting 10s for services to stabilize..."
+ sleep 10
+ echo "Starting traffic at ~12 RPS"
+ while true; do
+ curl -s -o /dev/null -w "%{http_code}" \
+ http://istio-ingressgateway.istio-system:80/productpage
+ echo " $(date +%H:%M:%S)"
+ sleep 0.083
+ done
+ resources:
+ requests:
+ cpu: 50m
+ memory: 32Mi
diff --git a/benchmark/envs-setup/istio-cluster_graalvm-banyandb/values.yaml b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/values.yaml
new file mode 100644
index 0000000..6fc2fe7
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_graalvm-banyandb/values.yaml
@@ -0,0 +1,6 @@
+oap:
+ config:
+ # Map Istio service labels to SkyWalking service/instance names.
+ metadata-service-mapping.yaml: |
+ serviceName: ${LABELS."service.istio.io/canonical-name",SERVICE}
+ serviceInstanceName: ${NAME}
diff --git a/benchmark/envs-setup/istio-cluster_oap-banyandb/kind.yaml b/benchmark/envs-setup/istio-cluster_oap-banyandb/kind.yaml
new file mode 100644
index 0000000..9776927
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_oap-banyandb/kind.yaml
@@ -0,0 +1,5 @@
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+nodes:
+ - role: control-plane
+    image: kindest/node:v1.34.3@sha256:08497ee19eace7b4b5348db5c6a1591d7752b164530a36f855cb0f2bdcbadd48
diff --git a/benchmark/envs-setup/istio-cluster_oap-banyandb/setup.sh b/benchmark/envs-setup/istio-cluster_oap-banyandb/setup.sh
new file mode 100755
index 0000000..93b4a8c
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_oap-banyandb/setup.sh
@@ -0,0 +1,389 @@
+#!/usr/bin/env bash
+# Environment setup: OAP (JVM) cluster + BanyanDB + Istio ALS on Kind.
+#
+# Deploys Istio with Access Log Service (ALS) enabled, SkyWalking OAP
+# cluster receiving telemetry from Envoy sidecars, and Istio Bookinfo
+# sample app as the workload.
+#
+# The context file is written to: $REPORT_DIR/env-context.sh
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+
+NAMESPACE="istio-system"
+CLUSTER_NAME="benchmark-cluster"
+
+# Load benchmark environment configuration (image repos, versions)
+BENCHMARKS_DIR="$(cd "$SCRIPT_DIR/../.." && pwd)"
+source "$BENCHMARKS_DIR/env"
+
+OAP_IMAGE_REPO="$SW_OAP_IMAGE_REPO"
+OAP_IMAGE_TAG="$SW_OAP_IMAGE_TAG"
+
+# Kind / K8s compatibility
+KIND_MIN_VERSION="0.25.0"
+HELM_MIN_VERSION="3.12.0"
+K8S_NODE_MINOR="1.34"
+
+log() { echo "[$(date +%H:%M:%S)] $*"; }
+
+version_gte() {
+ local IFS=.
+ local i a=($1) b=($2)
+ for ((i = 0; i < ${#b[@]}; i++)); do
+ local va=${a[i]:-0}
+ local vb=${b[i]:-0}
+ if ((va > vb)); then return 0; fi
+ if ((va < vb)); then return 1; fi
+ done
+ return 0
+}
+
+min_kind_for_k8s() {
+ case "$1" in
+ 1.28) echo "0.25.0" ;;
+ 1.29) echo "0.25.0" ;;
+ 1.30) echo "0.27.0" ;;
+ 1.31) echo "0.25.0" ;;
+ 1.32) echo "0.26.0" ;;
+ 1.33) echo "0.27.0" ;;
+ 1.34) echo "0.30.0" ;;
+ 1.35) echo "0.31.0" ;;
+ *) echo "unknown" ;;
+ esac
+}
+
+REPORT_DIR="${REPORT_DIR:?ERROR: REPORT_DIR must be set by the caller}"
+mkdir -p "$REPORT_DIR"
+
+#############################################################################
+# Pre-checks
+#############################################################################
+log "=== Pre-checks ==="
+
+for cmd in kind kubectl helm docker; do
+ if ! command -v "$cmd" &>/dev/null; then
+ echo "ERROR: $cmd not found. Please install it first."
+ exit 1
+ fi
+done
+
+KIND_VERSION=$(kind version | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
+log "kind version: $KIND_VERSION (minimum: $KIND_MIN_VERSION)"
+if ! version_gte "$KIND_VERSION" "$KIND_MIN_VERSION"; then
+ echo "ERROR: kind >= $KIND_MIN_VERSION is required, found $KIND_VERSION"
+ exit 1
+fi
+
+REQUIRED_KIND_FOR_NODE=$(min_kind_for_k8s "$K8S_NODE_MINOR")
+if [ "$REQUIRED_KIND_FOR_NODE" = "unknown" ]; then
+  echo "WARNING: No known kind compatibility data for K8s $K8S_NODE_MINOR. Proceeding anyway."
+elif ! version_gte "$KIND_VERSION" "$REQUIRED_KIND_FOR_NODE"; then
+  echo "ERROR: K8s $K8S_NODE_MINOR node image requires kind >= $REQUIRED_KIND_FOR_NODE, found $KIND_VERSION"
+ exit 1
+fi
+log "kind $KIND_VERSION is compatible with K8s $K8S_NODE_MINOR node image."
+
+KUBECTL_CLIENT_VERSION=$(kubectl version --client -o json 2>/dev/null \
+  | grep -oE '"gitVersion":\s*"v([0-9]+\.[0-9]+)' | head -1 | grep -oE '[0-9]+\.[0-9]+')
+if [ -n "$KUBECTL_CLIENT_VERSION" ]; then
+ KUBECTL_MINOR=$(echo "$KUBECTL_CLIENT_VERSION" | cut -d. -f2)
+ NODE_MINOR=$(echo "$K8S_NODE_MINOR" | cut -d. -f2)
+ SKEW=$((KUBECTL_MINOR - NODE_MINOR))
+ if [ "$SKEW" -lt 0 ]; then SKEW=$((-SKEW)); fi
+  log "kubectl client: $KUBECTL_CLIENT_VERSION, K8s node: $K8S_NODE_MINOR (skew: $SKEW)"
+ if [ "$SKEW" -gt 1 ]; then
+    echo "ERROR: kubectl version $KUBECTL_CLIENT_VERSION is too far from K8s $K8S_NODE_MINOR (max ±1 minor)."
+ exit 1
+ fi
+else
+  echo "WARNING: Could not determine kubectl client version, skipping skew check."
+fi
+
+HELM_VERSION=$(helm version --short 2>/dev/null | grep -oE '[0-9]+\.[0-9]+\.[0-9]+' | head -1)
+log "Helm version: $HELM_VERSION (minimum: $HELM_MIN_VERSION)"
+if ! version_gte "$HELM_VERSION" "$HELM_MIN_VERSION"; then
+ echo "ERROR: Helm >= $HELM_MIN_VERSION is required, found $HELM_VERSION"
+ exit 1
+fi
+
+ISTIOCTL_LOCAL_VERSION=$(istioctl version --remote=false 2>/dev/null | head -1 || echo "none")
+if [ "$ISTIOCTL_LOCAL_VERSION" != "$ISTIO_VERSION" ]; then
+  log "istioctl version mismatch: have $ISTIOCTL_LOCAL_VERSION, need $ISTIO_VERSION. Downloading..."
+ ISTIO_DOWNLOAD_DIR="$BENCHMARKS_DIR/.istio"
+ mkdir -p "$ISTIO_DOWNLOAD_DIR"
+ if [ ! -f "$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin/istioctl" ]; then
+    (cd "$ISTIO_DOWNLOAD_DIR" && export ISTIO_VERSION && curl -sL https://istio.io/downloadIstio | sh -)
+ fi
+ if [ ! -f "$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin/istioctl" ]; then
+ echo "ERROR: Failed to download istioctl $ISTIO_VERSION"
+ exit 1
+ fi
+ export PATH="$ISTIO_DOWNLOAD_DIR/istio-${ISTIO_VERSION}/bin:$PATH"
+  ISTIOCTL_VERSION=$(istioctl version --remote=false 2>/dev/null | head -1 || echo "unknown")
+ log "Using downloaded istioctl: $ISTIOCTL_VERSION"
+else
+ ISTIOCTL_VERSION="$ISTIOCTL_LOCAL_VERSION"
+ log "istioctl version: $ISTIOCTL_VERSION"
+fi
+
+log "All version checks passed."
+
+DOCKER_CPUS=$(docker info --format '{{.NCPU}}' 2>/dev/null || echo "unknown")
+DOCKER_MEM_BYTES=$(docker info --format '{{.MemTotal}}' 2>/dev/null || echo "0")
+if [ "$DOCKER_MEM_BYTES" -gt 0 ] 2>/dev/null; then
+    DOCKER_MEM_GB=$(awk "BEGIN {printf \"%.1f\", $DOCKER_MEM_BYTES / 1073741824}")
+else
+ DOCKER_MEM_GB="unknown"
+fi
+DOCKER_SERVER_VERSION=$(docker info --format '{{.ServerVersion}}' 2>/dev/null
|| echo "unknown")
+DOCKER_OS=$(docker info --format '{{.OperatingSystem}}' 2>/dev/null || echo
"unknown")
+DOCKER_STORAGE_DRIVER=$(docker info --format '{{.Driver}}' 2>/dev/null || echo
"unknown")
+log "Docker: ${DOCKER_CPUS} CPUs, ${DOCKER_MEM_GB} GB memory (server:
${DOCKER_SERVER_VERSION}, ${DOCKER_OS})"
+
+if [ "$DOCKER_MEM_BYTES" -gt 0 ] 2>/dev/null; then
+ MIN_MEM_BYTES=$((4 * 1073741824))
+ if [ "$DOCKER_MEM_BYTES" -lt "$MIN_MEM_BYTES" ]; then
+ echo "WARNING: Docker has only ${DOCKER_MEM_GB} GB memory. Recommend
>= 4 GB for this benchmark."
+ fi
+fi
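The GB figure logged above is just `MemTotal` in bytes divided by 2^30 (1073741824), printed to one decimal with the same awk expression; for example, a daemon reporting 8 GiB:

```shell
# 8 GiB in bytes -> "8.0", via the same awk conversion used in the log line.
MEM_BYTES=8589934592
awk "BEGIN {printf \"%.1f\n\", $MEM_BYTES / 1073741824}"
```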
+
+#############################################################################
+# Boot Kind cluster
+#############################################################################
+log "=== Booting Kind cluster ==="
+
+if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then
+ log "Kind cluster '$CLUSTER_NAME' already exists, reusing."
+else
+ log "Creating Kind cluster '$CLUSTER_NAME'..."
+ kind create cluster --name "$CLUSTER_NAME" --config "$SCRIPT_DIR/kind.yaml"
+fi
+
+BANYANDB_IMAGE="${SW_BANYANDB_IMAGE_REPO}:${SW_BANYANDB_IMAGE_TAG}"
+OAP_IMAGE="${OAP_IMAGE_REPO}:${OAP_IMAGE_TAG}"
+UI_IMAGE="${SW_UI_IMAGE_REPO}:${SW_UI_IMAGE_TAG}"
+IMAGES=(
+ "$OAP_IMAGE"
+ "$UI_IMAGE"
+ "$BANYANDB_IMAGE"
+ "docker.io/istio/pilot:${ISTIO_VERSION}"
+ "docker.io/istio/proxyv2:${ISTIO_VERSION}"
+ "docker.io/istio/examples-bookinfo-productpage-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-details-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v2:1.20.2"
+ "docker.io/istio/examples-bookinfo-reviews-v3:1.20.2"
+ "docker.io/istio/examples-bookinfo-ratings-v1:1.20.2"
+ "docker.io/istio/examples-bookinfo-ratings-v2:1.20.2"
+ "docker.io/istio/examples-bookinfo-mongodb:1.20.2"
+ "curlimages/curl:latest"
+ "registry.k8s.io/metrics-server/metrics-server:v0.8.1"
+)
+log "Pulling images on host (if not cached)..."
+for img in "${IMAGES[@]}"; do
+ pulled=false
+ for attempt in 1 2 3; do
+ if docker pull "$img" -q 2>/dev/null; then
+ pulled=true
+ break
+ fi
+ [ "$attempt" -lt 3 ] && sleep 5
+ done
+ if [ "$pulled" = false ]; then
+ log " WARNING: failed to pull $img after 3 attempts"
+ fi
+done
+
+log "Loading images into Kind..."
+for img in "${IMAGES[@]}"; do
+  kind load docker-image "$img" --name "$CLUSTER_NAME" 2>/dev/null || log "  WARNING: failed to load $img"
+done
+
+#############################################################################
+# Install Istio
+#############################################################################
+log "=== Installing Istio $ISTIO_VERSION ==="
+
+istioctl install -y --set profile=demo \
+  --set meshConfig.defaultConfig.envoyAccessLogService.address=skywalking-oap.${NAMESPACE}:11800 \
+  --set meshConfig.enableEnvoyAccessLogService=true
+
+kubectl label namespace default istio-injection=enabled --overwrite
+
+#############################################################################
+# Deploy SkyWalking via Helm
+#############################################################################
+log "=== Deploying SkyWalking (OAP x2 + BanyanDB + UI) via Helm ==="
+
+helm -n "$NAMESPACE" upgrade --install skywalking \
+ "$SW_HELM_CHART" \
+ --version "0.0.0-${SW_KUBERNETES_COMMIT_SHA}" \
+ --set fullnameOverride=skywalking \
+ --set oap.replicas=2 \
+ --set oap.image.repository="$OAP_IMAGE_REPO" \
+ --set oap.image.tag="$OAP_IMAGE_TAG" \
+ --set oap.storageType=banyandb \
+ --set oap.env.SW_ENVOY_METRIC_ALS_HTTP_ANALYSIS="$ALS_ANALYZER" \
+ --set oap.env.SW_ENVOY_METRIC_ALS_TCP_ANALYSIS="$ALS_ANALYZER" \
+ --set oap.env.SW_HEALTH_CHECKER=default \
+ --set oap.env.SW_TELEMETRY=prometheus \
+ --set oap.envoy.als.enabled=true \
+ --set ui.image.repository="$SW_UI_IMAGE_REPO" \
+ --set ui.image.tag="$SW_UI_IMAGE_TAG" \
+ --set elasticsearch.enabled=false \
+ --set banyandb.enabled=true \
+ --set banyandb.image.repository="$SW_BANYANDB_IMAGE_REPO" \
+ --set banyandb.image.tag="$SW_BANYANDB_IMAGE_TAG" \
+ --set banyandb.standalone.enabled=true \
+ --timeout 1200s \
+ -f "$SCRIPT_DIR/values.yaml"
+
+log "Waiting for BanyanDB to be ready..."
+kubectl -n "$NAMESPACE" wait --for=condition=ready pod -l
app.kubernetes.io/name=banyandb --timeout=300s
+
+log "Waiting for OAP init job to complete..."
+for i in $(seq 1 60); do
+  if kubectl -n "$NAMESPACE" get jobs -l component=skywalking-job -o jsonpath='{.items[0].status.succeeded}' 2>/dev/null | grep -q '1'; then
+    log "OAP init job succeeded."
+    break
+  fi
+  if [ "$i" -eq 60 ]; then
+    echo "ERROR: OAP init job did not complete within 300s."
+    kubectl -n "$NAMESPACE" get pods -l component=skywalking-job 2>/dev/null
+    exit 1
+  fi
+  sleep 5
+done
+
+log "Waiting for OAP pods to be ready..."
+kubectl -n "$NAMESPACE" wait --for=condition=ready pod -l
app=skywalking,component=oap --timeout=300s
+
+#############################################################################
+# Capture K8s node resources
+#############################################################################
+log "Capturing node resource info..."
+kubectl get nodes -o json | awk '
+  BEGIN { print "--- K8s Node Resources ---" }
+  /"capacity":/    { cap=1; alloc=0 }
+  /"allocatable":/ { alloc=1; cap=0 }
+  /}/              { cap=0; alloc=0 }
+  cap && /"cpu":/ { gsub(/[",]/, ""); printf "  capacity.cpu: %s\n", $2 }
+  cap && /"memory":/ { gsub(/[",]/, ""); printf "  capacity.memory: %s\n", $2 }
+  cap && /"ephemeral-storage":/ { gsub(/[",]/, ""); printf "  capacity.storage: %s\n", $2 }
+  cap && /"pods":/ { gsub(/[",]/, ""); printf "  capacity.pods: %s\n", $2 }
+  alloc && /"cpu":/ { gsub(/[",]/, ""); printf "  allocatable.cpu: %s\n", $2 }
+  alloc && /"memory":/ { gsub(/[",]/, ""); printf "  allocatable.memory: %s\n", $2 }
+' > "$REPORT_DIR/node-resources.txt"
+kubectl describe node | sed -n '/Allocated resources/,/Events/p' \
+ >> "$REPORT_DIR/node-resources.txt" 2>/dev/null || true
+
+#############################################################################
+# Deploy Istio Bookinfo sample app
+#############################################################################
+log "=== Deploying Bookinfo sample app ==="
+
+BOOKINFO_BASE="https://raw.githubusercontent.com/istio/istio/${ISTIO_VERSION}/samples/bookinfo"
+
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/bookinfo-gateway.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo-ratings-v2.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/platform/kube/bookinfo-db.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/destination-rule-all.yaml"
+kubectl apply -f "${BOOKINFO_BASE}/networking/virtual-service-ratings-db.yaml"
+
+log "Waiting for Bookinfo pods to be ready..."
+kubectl -n default wait --for=condition=ready pod -l app=productpage --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=details --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=ratings,version=v1 --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=reviews,version=v1 --timeout=300s
+kubectl -n default wait --for=condition=ready pod -l app=reviews,version=v2 --timeout=300s
+
+log "Deploying traffic generator..."
+kubectl apply -f "$SCRIPT_DIR/traffic-gen.yaml"
+kubectl -n default wait --for=condition=ready pod -l app=traffic-gen --timeout=60s
+
+#############################################################################
+# Cluster health check
+#############################################################################
+log "=== Cluster health check (remote_out_count) ==="
+log "Waiting 30s for traffic to flow..."
+sleep 30
+
+OAP_PODS_CHECK=($(kubectl -n "$NAMESPACE" get pods -l
app=skywalking,component=oap -o jsonpath='{.items[*].metadata.name}'))
+EXPECTED_NODES=${#OAP_PODS_CHECK[@]}
+CLUSTER_HEALTHY=true
+
+CURL_IMAGE="curlimages/curl:latest"
+for pod in "${OAP_PODS_CHECK[@]}"; do
+ log " Checking $pod..."
+ POD_IP=$(kubectl -n "$NAMESPACE" get pod "$pod" -o
jsonpath='{.status.podIP}')
+ METRICS=$(kubectl -n "$NAMESPACE" run "health-check-${pod##*-}" --rm -i
--restart=Never \
+ --image="$CURL_IMAGE" -- curl -s "http://${POD_IP}:1234/metrics"
2>/dev/null) || METRICS=""
+ REMOTE_OUT=$(echo "$METRICS" | grep '^remote_out_count{' || true)
+
+ if [ -z "$REMOTE_OUT" ]; then
+ REMOTE_IN=$(echo "$METRICS" | grep '^remote_in_count{' || true)
+ if [ -n "$REMOTE_IN" ]; then
+ log " $pod: no remote_out_count but has remote_in_count
(receiver-only node)"
+ else
+ log " WARNING: $pod has no remote_out_count or remote_in_count"
+ CLUSTER_HEALTHY=false
+ fi
+ else
+ DEST_COUNT=$(echo "$REMOTE_OUT" | wc -l | tr -d ' ')
+ log " $pod: $DEST_COUNT dest(s)"
+ fi
+done
+
+if [ "$CLUSTER_HEALTHY" = true ]; then
+ log "Cluster health check passed."
+else
+ log "WARNING: Cluster health check has issues. Proceeding anyway."
+fi
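For reference, the loop above greps Prometheus-style self-telemetry text; the sample below is fabricated to show the shape being matched (the label names are illustrative, not the exact OAP label set). One `remote_out_count` series corresponds to one gRPC peer the node forwards data to:

```shell
# Fabricated sample of the metric families the health check looks for.
METRICS='remote_out_count{dest="10.244.0.7:11800"} 1283
remote_in_count{src="10.244.0.6"} 942'

# Count destination peers, exactly as DEST_COUNT does above.
echo "$METRICS" | grep -c '^remote_out_count{'
```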
+
+#############################################################################
+# Port-forward OAP for local queries
+#############################################################################
+log "Setting up port-forwards..."
+kubectl -n "$NAMESPACE" port-forward svc/skywalking-oap 12800:12800 &
+SETUP_BG_PIDS=($!)
+kubectl -n "$NAMESPACE" port-forward svc/skywalking-ui 8080:80 &
+SETUP_BG_PIDS+=($!)
+sleep 3
+
+log "Environment is up. OAP at localhost:12800, UI at localhost:8080"
+
+#############################################################################
+# Write context file
+#############################################################################
+CONTEXT_FILE="$REPORT_DIR/env-context.sh"
+cat > "$CONTEXT_FILE" <<EOF
+# Auto-generated by envs-setup/istio-cluster_oap-banyandb/setup.sh
+export ENV_NAME="istio-cluster_oap-banyandb"
+export OAP_VARIANT="jvm"
+export NAMESPACE="$NAMESPACE"
+export CLUSTER_NAME="$CLUSTER_NAME"
+export OAP_HOST="localhost"
+export OAP_PORT="12800"
+export OAP_GRPC_PORT="11800"
+export UI_HOST="localhost"
+export UI_PORT="8080"
+export OAP_SELECTOR="app=skywalking,component=oap"
+export REPORT_DIR="$REPORT_DIR"
+export ISTIO_VERSION="$ISTIO_VERSION"
+export ALS_ANALYZER="$ALS_ANALYZER"
+export DOCKER_CPUS="$DOCKER_CPUS"
+export DOCKER_MEM_GB="$DOCKER_MEM_GB"
+export DOCKER_SERVER_VERSION="$DOCKER_SERVER_VERSION"
+export DOCKER_OS="$DOCKER_OS"
+export DOCKER_STORAGE_DRIVER="$DOCKER_STORAGE_DRIVER"
+export KIND_VERSION="$KIND_VERSION"
+export KUBECTL_CLIENT_VERSION="${KUBECTL_CLIENT_VERSION:-unknown}"
+export HELM_VERSION="$HELM_VERSION"
+export ISTIOCTL_VERSION="$ISTIOCTL_VERSION"
+export K8S_NODE_MINOR="$K8S_NODE_MINOR"
+export SETUP_BG_PIDS="${SETUP_BG_PIDS[*]}"
+EOF
+
+log "Context written to: $CONTEXT_FILE"
+log "Environment setup complete."
diff --git a/benchmark/envs-setup/istio-cluster_oap-banyandb/traffic-gen.yaml
b/benchmark/envs-setup/istio-cluster_oap-banyandb/traffic-gen.yaml
new file mode 100644
index 0000000..2af8dcd
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_oap-banyandb/traffic-gen.yaml
@@ -0,0 +1,38 @@
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: traffic-gen
+ labels:
+ app: traffic-gen
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: traffic-gen
+ template:
+ metadata:
+ annotations:
+ sidecar.istio.io/inject: "false"
+ labels:
+ app: traffic-gen
+ spec:
+ containers:
+ - name: traffic-gen
+ image: curlimages/curl:latest
+ imagePullPolicy: IfNotPresent
+ command: ["/bin/sh", "-c", "--"]
+ args:
+ - |
+ echo "Waiting 10s for services to stabilize..."
+ sleep 10
+ echo "Starting traffic at ~12 RPS"
+ while true; do
+ curl -s -o /dev/null -w "%{http_code}" \
+ http://istio-ingressgateway.istio-system:80/productpage
+ echo " $(date +%H:%M:%S)"
+ sleep 0.083
+ done
+ resources:
+ requests:
+ cpu: 50m
+ memory: 32Mi
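The ~12 RPS figure follows directly from the `sleep 0.083` between requests: the sleep alone caps the loop at 1/0.083 ≈ 12 requests per second, and curl latency nudges the real rate slightly lower. A quick sanity check of the arithmetic:

```shell
# Upper bound on the generator's request rate from the sleep interval alone.
awk 'BEGIN { printf "%.1f requests/s\n", 1 / 0.083 }'
```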
diff --git a/benchmark/envs-setup/istio-cluster_oap-banyandb/values.yaml
b/benchmark/envs-setup/istio-cluster_oap-banyandb/values.yaml
new file mode 100644
index 0000000..6fc2fe7
--- /dev/null
+++ b/benchmark/envs-setup/istio-cluster_oap-banyandb/values.yaml
@@ -0,0 +1,6 @@
+oap:
+ config:
+ # Map Istio service labels to SkyWalking service/instance names.
+ metadata-service-mapping.yaml: |
+ serviceName: ${LABELS."service.istio.io/canonical-name",SERVICE}
+ serviceInstanceName: ${NAME}
diff --git a/benchmark/run.sh b/benchmark/run.sh
new file mode 100755
index 0000000..4180e67
--- /dev/null
+++ b/benchmark/run.sh
@@ -0,0 +1,138 @@
+#!/usr/bin/env bash
+# Benchmark runner — single entry point with two modes.
+#
+# Mode 1: Setup environment only
+# ./benchmark/run.sh setup <env-name>
+#
+# Mode 2: Setup environment + run benchmark case
+# ./benchmark/run.sh run <env-name> <case-name>
+#
+# Available environments: (ls benchmark/envs-setup/)
+# Available cases: (ls benchmark/cases/)
+
+set -euo pipefail
+
+BENCHMARKS_DIR="$(cd "$(dirname "$0")" && pwd)"
+TIMESTAMP="$(date +%Y%m%d-%H%M%S)"
+
+source "$BENCHMARKS_DIR/env" 2>/dev/null || true
+CLUSTER_NAME="${CLUSTER_NAME:-benchmark-cluster}"
+
+cleanup_cluster() {
+ echo ""
+ echo ">>> Cleaning up..."
+ if kind get clusters 2>/dev/null | grep -q "^${CLUSTER_NAME}$"; then
+ echo " Deleting Kind cluster '${CLUSTER_NAME}'..."
+ kind delete cluster --name "$CLUSTER_NAME" 2>&1 || true
+ fi
+ echo " Pruning dangling Docker resources..."
+ docker image prune -f 2>&1 | tail -1 || true
+ docker volume prune -f 2>&1 | tail -1 || true
+ echo ">>> Cleanup complete."
+}
+
+usage() {
+ echo "Usage:"
+ echo " $0 setup <env-name> Setup environment only"
+ echo " $0 run <env-name> <case-name> Setup environment + run benchmark
case"
+ echo ""
+ echo "Available environments:"
+ for d in "$BENCHMARKS_DIR"/envs-setup/*/; do
+ [ -d "$d" ] && echo " $(basename "$d")"
+ done
+ echo ""
+ echo "Available cases:"
+ for d in "$BENCHMARKS_DIR"/cases/*/; do
+ [ -d "$d" ] && echo " $(basename "$d")"
+ done
+ exit 1
+}
+
+if [ $# -lt 2 ]; then
+ usage
+fi
+
+MODE="$1"
+ENV_NAME="$2"
+
+ENV_DIR="$BENCHMARKS_DIR/envs-setup/$ENV_NAME"
+if [ ! -d "$ENV_DIR" ] || [ ! -f "$ENV_DIR/setup.sh" ]; then
+ echo "ERROR: Environment '$ENV_NAME' not found."
+ echo " Expected: $ENV_DIR/setup.sh"
+ exit 1
+fi
+
+case "$MODE" in
+ setup)
+ export REPORT_DIR="$BENCHMARKS_DIR/reports/$ENV_NAME/$TIMESTAMP"
+ mkdir -p "$REPORT_DIR"
+
+ setup_cleanup_on_error() {
+ local rc=$?
+ if [ $rc -ne 0 ]; then
+ cleanup_cluster
+ fi
+ }
+ trap setup_cleanup_on_error EXIT
+
+ echo "=== Setting up environment: $ENV_NAME ==="
+ echo " Report dir: $REPORT_DIR"
+ echo ""
+
+ "$ENV_DIR/setup.sh"
+
+ echo ""
+ echo "=== Environment ready ==="
+ echo " Context file: $REPORT_DIR/env-context.sh"
+ echo ""
+ echo "To run a benchmark case against this environment:"
+ echo " $0 run $ENV_NAME <case-name>"
+ echo ""
+ echo "To tear down when done:"
+ echo " kind delete cluster --name $CLUSTER_NAME"
+ ;;
+
+ run)
+ if [ $# -lt 3 ]; then
+ echo "ERROR: 'run' mode requires both <env-name> and <case-name>."
+ echo ""
+ usage
+ fi
+ CASE_NAME="$3"
+
+ CASE_DIR="$BENCHMARKS_DIR/cases/$CASE_NAME"
+ if [ ! -d "$CASE_DIR" ] || [ ! -f "$CASE_DIR/run.sh" ]; then
+ echo "ERROR: Case '$CASE_NAME' not found."
+ echo " Expected: $CASE_DIR/run.sh"
+ exit 1
+ fi
+
+        export REPORT_DIR="$BENCHMARKS_DIR/reports/$ENV_NAME/$CASE_NAME/$TIMESTAMP"
+ mkdir -p "$REPORT_DIR"
+
+ trap cleanup_cluster EXIT
+
+ echo "=== Benchmark: $CASE_NAME on $ENV_NAME ==="
+ echo " Report dir: $REPORT_DIR"
+ echo ""
+
+ echo ">>> Setting up environment: $ENV_NAME"
+ "$ENV_DIR/setup.sh"
+
+ CONTEXT_FILE="$REPORT_DIR/env-context.sh"
+ if [ ! -f "$CONTEXT_FILE" ]; then
+ echo "ERROR: setup.sh did not produce $CONTEXT_FILE"
+ exit 1
+ fi
+
+ echo ""
+ echo ">>> Running case: $CASE_NAME"
+ "$CASE_DIR/run.sh" "$CONTEXT_FILE"
+ ;;
+
+ *)
+ echo "ERROR: Unknown mode '$MODE'."
+ echo ""
+ usage
+ ;;
+esac
diff --git a/build-tools/build-common/pom.xml b/build-tools/build-common/pom.xml
index 33eef7e..5214fc4 100644
--- a/build-tools/build-common/pom.xml
+++ b/build-tools/build-common/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>build-tools</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>build-common</artifactId>
diff --git a/build-tools/config-generator/pom.xml
b/build-tools/config-generator/pom.xml
index faf73dc..7c950fa 100644
--- a/build-tools/config-generator/pom.xml
+++ b/build-tools/config-generator/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>build-tools</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>config-generator</artifactId>
diff --git a/build-tools/pom.xml b/build-tools/pom.xml
index 65e9536..41ce0de 100644
--- a/build-tools/pom.xml
+++ b/build-tools/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>skywalking-graalvm-distro</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>build-tools</artifactId>
diff --git a/build-tools/precompiler/pom.xml b/build-tools/precompiler/pom.xml
index 186ba8c..971358e 100644
--- a/build-tools/precompiler/pom.xml
+++ b/build-tools/precompiler/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>build-tools</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>precompiler</artifactId>
diff --git a/changes/changes.md b/changes/changes.md
index 6f0856b..89f1294 100644
--- a/changes/changes.md
+++ b/changes/changes.md
@@ -1,6 +1,6 @@
# Changes
-## 1.0.0
+## 0.1.0
### Highlights
@@ -68,7 +68,21 @@ This is the initial release, built on top of Apache SkyWalking OAP server.
 - Istio ALS test: Envoy access log service integration.
 - Event, menu, alarm, log, meter, trace-profiling, telegraf, zabbix, and zipkin test cases.
+### Release Tooling
+
+- `release/pre-release.sh`: bump Maven version from SNAPSHOT to release, tag, and bump to next SNAPSHOT.
+- `release/release.sh`: create source tarball, build macOS native binary locally, download Linux binaries from GitHub Release, GPG sign all artifacts.
+
+### Benchmark
+
+- Local boot test: cold/warm startup time and idle memory comparison (JVM vs GraalVM).
+- Kubernetes resource usage test: CPU and memory under sustained ~12 RPS traffic on Kind + Istio + Bookinfo.
+- CPM validation: verify entry service call rate matches expected traffic.
+
 ### CI/CD
-- GitHub Actions CI: build, test, license check, and E2E tests.
-- Release workflow: manual trigger with commit SHA, multi-arch Linux + macOS builds, Docker manifest with version and commit tags, GitHub Release page with checksums.
+- Unified CI/release workflow: push to main, tag push, PR, and manual `workflow_dispatch` with optional commit SHA and version.
+- Dual Docker registry: push to both GHCR and Docker Hub (Docker Hub on release only).
+- Multi-arch Docker manifest: `linux/amd64` and `linux/arm64` via push-by-digest and `imagetools create`.
+- GitHub Release page: auto-upload tarballs with SHA-512 checksums and changelog from `changes/`.
+- 12 E2E test cases on CI (non-release builds).
diff --git a/docs/README.md b/docs/README.md
index cabe177..ae1860a 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,10 +1,25 @@
# SkyWalking GraalVM Distro
-**GraalVM native-image distribution of Apache SkyWalking OAP Server.**
+**GraalVM native-image distribution of [Apache SkyWalking](https://skywalking.apache.org/) OAP Server.**
-A self-contained native binary of the SkyWalking OAP backend — faster startup,
-lower memory, single-binary deployment. All your existing SkyWalking agents, UI,
-and tooling work unchanged.
+[Apache SkyWalking](https://skywalking.apache.org/) is an open-source APM and observability platform for
+distributed systems, providing metrics, tracing, logging, and profiling capabilities.
+The standard SkyWalking OAP server runs on the JVM with runtime code generation and dynamic module loading.
+
+This project produces a **GraalVM native-image build** of the same OAP server — a self-contained binary
+with faster startup, lower memory footprint, and single-file deployment. It wraps the upstream
+SkyWalking repository as a git submodule and moves all runtime code generation to build time,
+without modifying upstream source code.
+
+All your existing SkyWalking agents, UI, and tooling work unchanged. The observability features
+are identical — the differences are in how the server is built and deployed:
+
+- **Native binary** instead of JVM — instant startup, ~512MB memory
+- **BanyanDB only** — the sole supported storage backend
+- **Fixed module set** — modules selected at build time, no SPI discovery
+- **Pre-compiled DSL** — OAL, MAL, LAL, and Hierarchy rules compiled at build time
+
+Official documentation is published at [skywalking.apache.org/docs](https://skywalking.apache.org/docs/#ExperimentalGraalVMDistro).
## For Users
diff --git a/oap-graalvm-native/pom.xml b/oap-graalvm-native/pom.xml
index 51c5c71..d167bf9 100644
--- a/oap-graalvm-native/pom.xml
+++ b/oap-graalvm-native/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>skywalking-graalvm-distro</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>oap-graalvm-native</artifactId>
diff --git a/oap-graalvm-server/pom.xml b/oap-graalvm-server/pom.xml
index b7d1bb1..d1b202f 100644
--- a/oap-graalvm-server/pom.xml
+++ b/oap-graalvm-server/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>skywalking-graalvm-distro</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>oap-graalvm-server</artifactId>
diff --git a/oap-libs-for-graalvm/agent-analyzer-for-graalvm/pom.xml
b/oap-libs-for-graalvm/agent-analyzer-for-graalvm/pom.xml
index d1c4dd9..25e1770 100644
--- a/oap-libs-for-graalvm/agent-analyzer-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/agent-analyzer-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>agent-analyzer-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/aws-firehose-receiver-for-graalvm/pom.xml
b/oap-libs-for-graalvm/aws-firehose-receiver-for-graalvm/pom.xml
index cd5ab24..7aeb6df 100644
--- a/oap-libs-for-graalvm/aws-firehose-receiver-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/aws-firehose-receiver-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>aws-firehose-receiver-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/cilium-fetcher-for-graalvm/pom.xml
b/oap-libs-for-graalvm/cilium-fetcher-for-graalvm/pom.xml
index 52fcf0c..6048402 100644
--- a/oap-libs-for-graalvm/cilium-fetcher-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/cilium-fetcher-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>cilium-fetcher-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/ebpf-receiver-for-graalvm/pom.xml
b/oap-libs-for-graalvm/ebpf-receiver-for-graalvm/pom.xml
index 14ea389..867892c 100644
--- a/oap-libs-for-graalvm/ebpf-receiver-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/ebpf-receiver-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>ebpf-receiver-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/envoy-metrics-receiver-for-graalvm/pom.xml
b/oap-libs-for-graalvm/envoy-metrics-receiver-for-graalvm/pom.xml
index 1017c97..6c0b725 100644
--- a/oap-libs-for-graalvm/envoy-metrics-receiver-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/envoy-metrics-receiver-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>envoy-metrics-receiver-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/health-checker-for-graalvm/pom.xml
b/oap-libs-for-graalvm/health-checker-for-graalvm/pom.xml
index cb466f4..69c87bd 100644
--- a/oap-libs-for-graalvm/health-checker-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/health-checker-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>health-checker-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/library-module-for-graalvm/pom.xml
b/oap-libs-for-graalvm/library-module-for-graalvm/pom.xml
index 9b907fa..d28cd01 100644
--- a/oap-libs-for-graalvm/library-module-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/library-module-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>library-module-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/library-util-for-graalvm/pom.xml
b/oap-libs-for-graalvm/library-util-for-graalvm/pom.xml
index 38ee55e..aa8ec8d 100644
--- a/oap-libs-for-graalvm/library-util-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/library-util-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>library-util-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/log-analyzer-for-graalvm/pom.xml
b/oap-libs-for-graalvm/log-analyzer-for-graalvm/pom.xml
index b6f70f2..0089c94 100644
--- a/oap-libs-for-graalvm/log-analyzer-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/log-analyzer-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>log-analyzer-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/meter-analyzer-for-graalvm/pom.xml
b/oap-libs-for-graalvm/meter-analyzer-for-graalvm/pom.xml
index 7b9eaaf..e37349f 100644
--- a/oap-libs-for-graalvm/meter-analyzer-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/meter-analyzer-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>meter-analyzer-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/otel-receiver-for-graalvm/pom.xml
b/oap-libs-for-graalvm/otel-receiver-for-graalvm/pom.xml
index fcea5c4..d9ee8ac 100644
--- a/oap-libs-for-graalvm/otel-receiver-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/otel-receiver-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>otel-receiver-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/pom.xml b/oap-libs-for-graalvm/pom.xml
index 8d88d67..941366b 100644
--- a/oap-libs-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>skywalking-graalvm-distro</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>oap-libs-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/server-core-for-graalvm/pom.xml
b/oap-libs-for-graalvm/server-core-for-graalvm/pom.xml
index dbf17d9..46a65bd 100644
--- a/oap-libs-for-graalvm/server-core-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/server-core-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>server-core-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/server-starter-for-graalvm/pom.xml
b/oap-libs-for-graalvm/server-starter-for-graalvm/pom.xml
index 02d013e..f41b47a 100644
--- a/oap-libs-for-graalvm/server-starter-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/server-starter-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>server-starter-for-graalvm</artifactId>
diff --git a/oap-libs-for-graalvm/status-query-for-graalvm/pom.xml
b/oap-libs-for-graalvm/status-query-for-graalvm/pom.xml
index 83358ad..84c0be9 100644
--- a/oap-libs-for-graalvm/status-query-for-graalvm/pom.xml
+++ b/oap-libs-for-graalvm/status-query-for-graalvm/pom.xml
@@ -24,7 +24,7 @@
<parent>
<groupId>org.apache.skywalking</groupId>
<artifactId>oap-libs-for-graalvm</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
</parent>
<artifactId>status-query-for-graalvm</artifactId>
diff --git a/pom.xml b/pom.xml
index 71e8c80..3a98653 100644
--- a/pom.xml
+++ b/pom.xml
@@ -23,7 +23,7 @@
<groupId>org.apache.skywalking</groupId>
<artifactId>skywalking-graalvm-distro</artifactId>
- <version>1.0.0-SNAPSHOT</version>
+ <version>0.1.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>SkyWalking GraalVM Distro</name>
diff --git a/release/pre-release.sh b/release/pre-release.sh
new file mode 100755
index 0000000..0a9005e
--- /dev/null
+++ b/release/pre-release.sh
@@ -0,0 +1,109 @@
+#!/usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements. See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License. You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
+
+# ─── Helpers ─────────────────────────────────────────────────────────────────
+log() { echo "===> $*"; }
+error() { echo "ERROR: $*" >&2; exit 1; }
+
+# ─── Read current version from root pom.xml ──────────────────────────────────
+CURRENT_VERSION=$(sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p' "${REPO_ROOT}/pom.xml" | head -1)
+[[ -n "${CURRENT_VERSION}" ]] || error "Could not read version from pom.xml"
+[[ "${CURRENT_VERSION}" == *-SNAPSHOT ]] || error "Current version '${CURRENT_VERSION}' is not a SNAPSHOT version"
+
+log "Current version: ${CURRENT_VERSION}"
+
+# ─── Step 1: Determine release version ───────────────────────────────────────
+DEFAULT_RELEASE_VERSION="${CURRENT_VERSION%-SNAPSHOT}"
+
+echo ""
+read -r -p "Release version [${DEFAULT_RELEASE_VERSION}]: " RELEASE_VERSION
+RELEASE_VERSION="${RELEASE_VERSION:-${DEFAULT_RELEASE_VERSION}}"
+[[ "${RELEASE_VERSION}" != *-SNAPSHOT ]] || error "Release version must not contain -SNAPSHOT"
+
+# ─── Step 2: Determine next development version ─────────────────────────────
+# Default: bump minor version (0.1.0 -> 0.2.0-SNAPSHOT)
+IFS='.' read -r MAJOR MINOR PATCH <<< "${RELEASE_VERSION}"
+DEFAULT_NEXT_VERSION="${MAJOR}.$((MINOR + 1)).0-SNAPSHOT"
+
+echo ""
+read -r -p "Next development version [${DEFAULT_NEXT_VERSION}]: " NEXT_VERSION
+NEXT_VERSION="${NEXT_VERSION:-${DEFAULT_NEXT_VERSION}}"
+[[ "${NEXT_VERSION}" == *-SNAPSHOT ]] || error "Next development version must end with -SNAPSHOT"
+
+# ─── Confirm ─────────────────────────────────────────────────────────────────
+echo ""
+echo "Summary:"
+echo " Current version : ${CURRENT_VERSION}"
+echo " Release version : ${RELEASE_VERSION}"
+echo " Next dev version : ${NEXT_VERSION}"
+echo " Tag : v${RELEASE_VERSION}"
+echo ""
+read -r -p "Proceed? [y/N] " confirm
+[[ "${confirm}" =~ ^[Yy]$ ]] || { echo "Aborted."; exit 0; }
+
+# ─── Step 3: Bump to release version ────────────────────────────────────────
+log "Bumping version to ${RELEASE_VERSION}..."
+
+find "${REPO_ROOT}" -name pom.xml -not -path '*/skywalking/*' \
+ -exec sed -i '' "s/${CURRENT_VERSION}/${RELEASE_VERSION}/g" {} \;
+
+# Verify the change
+VERIFY_VERSION=$(sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p' "${REPO_ROOT}/pom.xml" | head -1)
+[[ "${VERIFY_VERSION}" == "${RELEASE_VERSION}" ]] || error "Version bump failed. pom.xml shows '${VERIFY_VERSION}'"
+
+log "Committing release version..."
+cd "${REPO_ROOT}"
+# Stage every module pom.xml (the vendored skywalking/ tree is excluded)
+git add $(find . -name pom.xml -not -path '*/skywalking/*')
+git commit -m "Release ${RELEASE_VERSION}"
+
+log "Creating tag v${RELEASE_VERSION}..."
+git tag "v${RELEASE_VERSION}"
+
+# ─── Step 4: Bump to next development version ───────────────────────────────
+log "Bumping version to ${NEXT_VERSION}..."
+
+find "${REPO_ROOT}" -name pom.xml -not -path '*/skywalking/*' \
+ -exec sed -i '' "s/${RELEASE_VERSION}/${NEXT_VERSION}/g" {} \;
+
+VERIFY_VERSION=$(sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p' "${REPO_ROOT}/pom.xml" | head -1)
+[[ "${VERIFY_VERSION}" == "${NEXT_VERSION}" ]] || error "Version bump failed. pom.xml shows '${VERIFY_VERSION}'"
+
+log "Committing next development version..."
+git add $(find . -name pom.xml -not -path '*/skywalking/*')
+git commit -m "Bump version to ${NEXT_VERSION}"
+
+# ─── Summary ─────────────────────────────────────────────────────────────────
+echo ""
+log "Pre-release complete!"
+echo ""
+echo "Created:"
+echo " - Commit: Release ${RELEASE_VERSION}"
+echo " - Tag: v${RELEASE_VERSION}"
+echo " - Commit: Bump version to ${NEXT_VERSION}"
+echo ""
+echo "Next steps:"
+echo " 1. Review commits: git log --oneline -3"
+echo " 2. Push tag: git push origin v${RELEASE_VERSION}"
+echo " 3. Push branch: git push origin main"
+echo " 4. Wait for CI release workflow to complete"
+echo " 5. Run: release/release.sh ${RELEASE_VERSION}"
diff --git a/release.sh b/release/release.sh
similarity index 98%
rename from release.sh
rename to release/release.sh
index 526f6b1..0d08250 100755
--- a/release.sh
+++ b/release/release.sh
@@ -20,6 +20,7 @@ set -euo pipefail
ARTIFACT_PREFIX="apache-skywalking-graalvm-distro"
RELEASE_DIR="release-package"
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+REPO_ROOT="$(cd "${SCRIPT_DIR}/.." && pwd)"
REPO="apache/skywalking-graalvm-distro"
# ─── Usage ───────────────────────────────────────────────────────────────────
@@ -154,9 +155,9 @@ log "Source package created: dist/${SRC_TARBALL}"
# ─── Step 3: Build darwin-arm64 native binary ─────────────────────────────────
log "Building macOS arm64 (Apple Silicon) native binary..."
-make native-image
+(cd "${REPO_ROOT}" && make native-image)
-NATIVE_SRC=$(ls "${SCRIPT_DIR}"/oap-graalvm-native/target/oap-graalvm-native-*-native-dist.tar.gz)
+NATIVE_SRC=$(ls "${REPO_ROOT}"/oap-graalvm-native/target/oap-graalvm-native-*-native-dist.tar.gz)
DARWIN_TARBALL="${ARTIFACT_PREFIX}-${VERSION}-darwin-arm64.tar.gz"
cp "${NATIVE_SRC}" "${DIST_DIR}/${DARWIN_TARBALL}"