brbzull0 opened a new pull request, #12892:
URL: https://github.com/apache/trafficserver/pull/12892
## New Configuration Reload Framework
### TL;DR
> __NOTE__: **Backward compatible:** The existing `traffic_ctl config reload` command works exactly as before —
> same syntax, same behavior from the operator's perspective. Internally, it now fires the new reload
> logic, which means every reload is automatically tracked, timed, and queryable.
**New `traffic_ctl` commands:**
```bash
# Basic reload — works exactly as before, but now returns a token for tracking
$ traffic_ctl config reload
✔ Reload scheduled [rldtk-1739808000000]
Monitor : traffic_ctl config reload -t rldtk-1739808000000 -m
Details : traffic_ctl config reload -t rldtk-1739808000000 -s -l
```
```bash
# Monitor mode with a custom token
$ traffic_ctl config reload -t deploy-v2.1 -m
✔ Reload scheduled [deploy-v2.1]
✔ [deploy-v2.1] ████████████████████ 11/11 success (245ms)
```
```bash
# Full status report
$ traffic_ctl config status -t deploy-v2.1
✔ Reload [success] — deploy-v2.1
Started : 2025 Feb 17 12:00:00.123
Finished: 2025 Feb 17 12:00:00.368
Duration: 245ms
✔ 11 success ◌ 0 in-progress ✗ 0 failed (11 total)
Tasks:
✔ logging.yaml ··························· 120ms
✔ ip_allow.yaml ·························· 18ms
✔ remap.config ··························· 42ms
✔ ssl_client_coordinator ················· 35ms
├─ ✔ sni.yaml ··························· 20ms
└─ ✔ ssl_multicert.config ··············· 15ms
...
```
**Failed reload — monitor mode:**
```bash
$ traffic_ctl config reload -t hotfix-ssl-cert -m
✔ Reload scheduled [hotfix-ssl-cert]
✗ [hotfix-ssl-cert] ██████████████░░░░░░ 9/11 fail (310ms)
Details : traffic_ctl config status -t hotfix-ssl-cert
```
**Failed reload — status report:**
```bash
$ traffic_ctl config status -t hotfix-ssl-cert
✗ Reload [fail] — hotfix-ssl-cert
Started : 2025 Feb 17 14:30:10.500
Finished: 2025 Feb 17 14:30:10.810
Duration: 310ms
✔ 9 success ◌ 0 in-progress ✗ 2 failed (11 total)
Tasks:
✔ ip_allow.yaml ·························· 18ms
✔ remap.config ··························· 42ms
✗ logging.yaml ·························· 120ms ✗ FAIL
✗ ssl_client_coordinator ················· 85ms ✗ FAIL
├─ ✔ sni.yaml ··························· 20ms
└─ ✗ ssl_multicert.config ··············· 65ms ✗ FAIL
...
```
**Inline YAML reload (runtime only, not persisted to disk):**
> **Note:** Inline YAML reload is currently disabled — no config handler supports `ConfigSource::FileAndRpc`
> yet. The infrastructure is in place and will be enabled as handlers are migrated. See TODO below.
```bash
$ traffic_ctl config reload -d @ip_allow_new.yaml -t update-ip-rules -m
✔ Reload scheduled [update-ip-rules]
✔ [update-ip-rules] ████████████████████ 1/1 success (18ms)
Note: Inline configuration is NOT persisted to disk.
Server restart will revert to file-based configuration.
```
The `-d` flag accepts `@filename` to read from a file, or `@-` to read from
stdin. The YAML file
uses **registry keys** as top-level keys — the key string passed as the
first argument to
`register_config()` or `register_record_config()`. The content under each
key is the actual YAML
that the config file normally contains — it is passed as-is to the handler
via `ctx.supplied_yaml()`.
A single file can target multiple handlers:
```yaml
# reload_rules.yaml — multiple configs in one file
# Each top-level key is a registry key (as declared in register_config()).
# The value is the full config content, exactly as it appears in the config file.
ip_allow:
  - apply: in
    ip_addrs: 0.0.0.0/0
    action: allow
    methods: ALL
sni:
  - fqdn: "*.example.com"
    verify_client: NONE
```
```bash
# From file — reloads both ip_allow and sni handlers
$ traffic_ctl config reload -d @reload_rules.yaml -t update-rules -m
# From stdin — pipe YAML directly into ATS
$ cat reload_rules.yaml | traffic_ctl config reload -d @- -m
```
### New `traffic_ctl` Commands
| Command | Description |
|---|---|
| `traffic_ctl config reload` | Trigger a file-based reload. Shows token and next-step hints. |
| `traffic_ctl config reload -m` | Trigger and monitor with a live progress bar. |
| `traffic_ctl config reload -s -l` | Trigger and immediately show detailed report with logs. |
| `traffic_ctl config reload -t <token>` | Reload with a custom token. |
| `traffic_ctl config reload -d @file.yaml` | Inline reload from file (runtime only, not persisted). |
| `traffic_ctl config reload -d @-` | Inline reload from stdin. |
| `traffic_ctl config reload --force` | Force a new reload even if one is in progress. |
| `traffic_ctl config status` | Show the last reload status. |
| `traffic_ctl config status -t <token>` | Show status of a specific reload. |
| `traffic_ctl config status -c all` | Show full reload history. |
### New JSONRPC APIs
| Method | Description |
|---|---|
| `admin_config_reload` | Unified reload — file-based (default) or inline when the `configs` param is present. Params: `token`, `force`, `configs`. |
| `get_reload_config_status` | Query reload status by `token` or get the last N reloads via `count`. |
**Inline reload RPC example:**
```yaml
jsonrpc: "2.0"
method: "admin_config_reload"
params:
  token: "update-ip-and-sni"
  configs:
    ip_allow:
      - apply: in
        ip_addrs: 0.0.0.0/0
        action: allow
        methods: ALL
    sni:
      - fqdn: "*.example.com"
        verify_client: NONE
```
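For comparison, a status query over the same transport might look like the sketch below. The `token` and `count` parameter names are taken from the table above; treat the exact shape as illustrative rather than authoritative.
```yaml
jsonrpc: "2.0"
method: "get_reload_config_status"
params:
  token: "update-ip-and-sni"
  # or, instead of a token, request the last N reloads:
  # count: 5
```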
---
### Background: Issues with the Previous Reload Mechanism
The previous configuration reload relied on a loose collection of
independent record callbacks
(`RecRegisterConfigUpdateCb`) wired through `FileManager` and
`AddConfigFilesHere.cc`. Each config
module registered its file independently, and reloads were fire-and-forget:
- **No visibility** — There was no way to know whether a reload succeeded or
failed, which handlers
ran, or how long each one took.
- **No coordination** — Each handler ran independently with no shared
context. There was no concept of
a "reload session" grouping all config updates triggered by a single
request.
- **No inline content** — Configuration could only be reloaded from files on
disk. There was no way
to push YAML content at runtime through the RPC or CLI.
- **Scattered registration** — File registrations were split between
`AddConfigFilesHere.cc` (for
`FileManager`) and individual modules (for record callbacks), making it
hard to reason about which
files were tracked and which records triggered reloads.
- **No token tracking** — There was no identifier for a reload operation, so
you couldn't query the
status of a specific reload or distinguish between overlapping reloads.
### What the New Design Solves
1. **Full reload traceability** — Every reload gets a token. Each config
handler reports its status
(`in_progress`, `success`, `fail`) through a `ConfigContext`. Results are
aggregated into a task
tree with per-handler timings and logs.
2. **Centralized registration** — `ConfigRegistry` is the single source of
truth for all config files,
their filename records, trigger records, and reload handlers.
3. **Inline YAML injection** — Handlers that opt in
(`ConfigSource::FileAndRpc`) can receive YAML
content directly through the RPC, without writing to disk. This is
runtime-only — the content
lives in memory and is lost on restart.
4. **Coordinated reload sessions** — `ReloadCoordinator` manages the
lifecycle of each reload:
token generation, concurrency control (`--force` to override), timeout
detection, and history.
5. **CLI observability** — `traffic_ctl config reload -m` shows a live
progress bar.
`traffic_ctl config status` provides a full post-mortem with task tree,
durations, and failure
details.
### Basic Design
```
┌──────────────┐   JSONRPC   ┌────────────────┐
│ traffic_ctl  │────────────►│  RPC Handler   │
│ config reload│             │ reload_config  │
└──────────────┘             └───────┬────────┘
                                     │
            ┌────────────────────────┼────────────────────────┐
            │                        ▼                        │
            │              ┌───────────────────┐              │
            │              │ ReloadCoordinator │              │
            │              │ - prepare_reload  │              │
            │              │ - token tracking  │              │
            │              │ - history         │              │
            │              └─────────┬─────────┘              │
            │                        │                        │
            │            ┌───────────┴───────────┐            │
            │            ▼                       ▼            │
            │   ┌────────────────┐     ┌───────────────────┐  │
            │   │ File-based     │     │ Inline mode       │  │
            │   │ FileManager    │     │ set_passed_config │  │
            │   │ rereadConfig   │     │ schedule_reload   │  │
            │   └────────┬───────┘     └─────────┬─────────┘  │
            │            │                       │            │
            │            └───────────┬───────────┘            │
            │                        ▼                        │
            │              ┌───────────────────┐              │
            │              │  ConfigRegistry   │              │
            │              │  execute_reload   │              │
            │              └─────────┬─────────┘              │
            │                        ▼                        │
            │              ┌───────────────────┐              │
            │              │ ConfigContext     │              │
            │              │ - in_progress()   │              │
            │              │ - log()           │              │
            │              │ - complete()      │              │
            │              │ - fail()          │              │
            │              │ - supplied_yaml() │              │
            │              └─────────┬─────────┘              │
            │                        ▼                        │
            │              ┌───────────────────┐              │
            │              │ Handler           │              │
            │              │ (IpAllow, SNI,    │              │
            │              │  remap, etc.)     │              │
            │              └───────────────────┘              │
            └─────────────────────────────────────────────────┘
```
**Key components:**
| Component | Role |
|---|---|
| `ConfigRegistry` | Singleton registry mapping config keys to handlers, filenames, and trigger records. Self-registers with `FileManager`. |
| `ReloadCoordinator` | Manages reload sessions: token generation, concurrency, timeout detection, history. |
| `ConfigReloadTask` | Tracks a single reload operation as a tree of sub-tasks with status, timings, and logs. |
| `ConfigContext` | Lightweight context passed to handlers. Provides `in_progress()`, `complete()`, `fail()`, `log()`, `supplied_yaml()`, and `add_dependent_ctx()`. Safe no-op at startup (no active reload task). |
| `ConfigReloadProgress` | Periodic checker that detects stuck tasks and marks them as `TIMEOUT`. |
**Stuck reload checker:**
`ConfigReloadProgress` is a periodic continuation scheduled on `ET_TASK`. It
monitors active reload
tasks and marks any that exceed the configured timeout as `TIMEOUT`. This
acts as a safety net for
handlers that fail to call `ctx.complete()` or `ctx.fail()` — for example,
if a handler crashes,
deadlocks, or its deferred thread never executes. The checker reads
`proxy.config.admin.reload.timeout`
dynamically at each interval, so the timeout can be adjusted at runtime
without a restart. This is
a simple record read (`RecGetRecordString`), not an expensive operation.
Setting the
timeout to `"0"` disables it (tasks will run indefinitely until completion).
The checker is not a global poller — a new instance is created per-reload
and self-terminates once
the task reaches a terminal state. No idle polling when no reload is in
progress.
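The check itself reduces to comparing elapsed time against the configured timeout. Below is a minimal, self-contained sketch of that logic using plain `std::chrono`; the names are hypothetical, and it intentionally omits the ATS continuation and record-reading plumbing.
```c++
#include <chrono>
#include <string>

using Clock = std::chrono::steady_clock;

enum class Status { IN_PROGRESS, SUCCESS, FAIL, TIMEOUT };

struct ActiveReload {
  std::string token;
  Clock::time_point started;
  Status status{Status::IN_PROGRESS};
};

// Called at every check interval. `timeout` is re-read from the timeout record
// each time, so runtime changes take effect. Returns true once the task is in
// a terminal state and the checker can stop rescheduling itself.
bool check_for_stuck_reload(ActiveReload &task, std::chrono::seconds timeout) {
  if (task.status != Status::IN_PROGRESS) {
    return true; // already terminal: checker self-terminates, no idle polling
  }
  if (timeout.count() == 0) {
    return false; // "0" disables the timeout; keep waiting for completion
  }
  if (Clock::now() - task.started > timeout) {
    task.status = Status::TIMEOUT; // safety net for handlers that never complete()/fail()
    return true;
  }
  return false; // still within budget; reschedule the next check
}
```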
### How Handlers Work
**Before — scattered registration (ip_allow example):**
Registration was split across multiple files with no centralized tracking:
```c++
// 1. AddConfigFilesHere.cc — register with FileManager for mtime detection
registerFile("proxy.config.cache.ip_allow.filename", ts::filename::IP_ALLOW,
NOT_REQUIRED);
registerFile("proxy.config.cache.ip_categories.filename",
ts::filename::IP_CATEGORIES, NOT_REQUIRED);
// 2. IPAllow.cc — attach record callback (fire-and-forget, no status
tracking)
ConfigUpdateHandler<IpAllow> *ipAllowUpdate = new
ConfigUpdateHandler<IpAllow>("ip_allow");
ipAllowUpdate->attach("proxy.config.cache.ip_allow.filename");
// 3. IpAllow::reconfigure() — no context, no status, no tracing
void IpAllow::reconfigure() {
// ... load config from disk, no way to report success/failure ...
}
```
**Now — each module self-registers with full tracing:**
Each module registers itself directly with `ConfigRegistry`. No more
separate `AddConfigFilesHere.cc`
entry — the registry handles `FileManager` registration, record callbacks,
and status tracking
automatically:
```c++
// IPAllow.cc — one call replaces all three steps above
config::ConfigRegistry::Get_Instance().register_config(
  "ip_allow",                                            // registry key
  ts::filename::IP_ALLOW,                                // default filename
  "proxy.config.cache.ip_allow.filename",                // record holding filename
  [](ConfigContext ctx) { IpAllow::reconfigure(ctx); },  // handler with context
  config::ConfigSource::FileOnly,                        // content source
  {"proxy.config.cache.ip_allow.filename"});             // trigger records

// Auxiliary file — attach ip_categories as a dependency (changes trigger ip_allow reload)
config::ConfigRegistry::Get_Instance().add_file_dependency(
  "ip_allow",                                    // config key to attach to
  "proxy.config.cache.ip_categories.filename",   // record holding the filename
  ts::filename::IP_CATEGORIES,                   // default filename
  false);                                        // not required
```
Additional triggers can be attached from any module at any time:
```c++
// From another module — attach an extra record trigger
config::ConfigRegistry::Get_Instance().attach("ip_allow", "proxy.config.some.extra.record");
```
Composite configs can declare file dependencies and dependency keys. For
example, `SSLClientCoordinator`
owns `sni.yaml` and `ssl_multicert.config` as children:
```c++
// Main registration (no file of its own — it's a pure coordinator)
// From SSLClientCoordinator::startup()
config::ConfigRegistry::Get_Instance().register_record_config(
  "ssl_client_coordinator",                                           // registry key
  [](ConfigContext ctx) { SSLClientCoordinator::reconfigure(ctx); },  // reload handler
  {"proxy.config.ssl.client.cert.path",                               // trigger records
   "proxy.config.ssl.client.cert.filename",
   "proxy.config.ssl.client.private_key.path",
   "proxy.config.ssl.client.private_key.filename",
   "proxy.config.ssl.keylog_file",
   "proxy.config.ssl.server.cert.path",
   "proxy.config.ssl.server.private_key.path",
   "proxy.config.ssl.server.cert_chain.filename",
   "proxy.config.ssl.server.session_ticket.enable"});

// Track sni.yaml — FileManager watches for mtime changes, record wired to trigger reload
config::ConfigRegistry::Get_Instance().add_file_and_node_dependency(
  "ssl_client_coordinator", "sni",
  "proxy.config.ssl.servername.filename", ts::filename::SNI, false);

// Track ssl_multicert.config — same pattern
config::ConfigRegistry::Get_Instance().add_file_and_node_dependency(
  "ssl_client_coordinator", "ssl_multicert",
  "proxy.config.ssl.server.multicert.filename", ts::filename::SSL_MULTICERT, false);
```
**Handler interaction with `ConfigContext`:**
Each config module implements a C++ reload handler — the callback passed to
`register_config()`.
The handler reports progress through the `ConfigContext`:
```c++
void IpAllow::reconfigure(ConfigContext ctx) {
ctx.in_progress();
// ... load config from disk ...
ctx.complete("Loaded successfully");
// or on error:
// ctx.fail(errata, "Failed to load");
}
```
When a reload fires, the handler receives a `ConfigContext`:
- **File source** — `ctx.supplied_yaml()` is undefined; the handler reads
from its registered file on disk.
- **RPC source** — `ctx.supplied_yaml()` contains the YAML node passed via
`--data` / RPC.
The content is **runtime-only** and is never written to disk.
Handlers report progress:
```c++
ctx.in_progress("Parsing ip_allow.yaml");
ctx.log("Loaded 42 rules");
ctx.complete("Finished loading");
// or on error:
ctx.fail(errata, "Failed to load ip_allow.yaml");
```
**Supplied YAML — inline content via `-d` / RPC:**
> **Note:** The infrastructure for RPC-supplied YAML is fully implemented, but no handler currently
> opts into `ConfigSource::FileAndRpc`. File-based handlers use `ConfigSource::FileOnly`, and
> record-only handlers use `ConfigSource::RecordOnly` (implicitly via `register_record_config()`).
When a handler opts into `ConfigSource::FileAndRpc`, it can receive YAML
content directly instead
of reading from disk. The handler checks `ctx.supplied_yaml()` to determine
the source:
```c++
void IpAllow::reconfigure(ConfigContext ctx) {
ctx.in_progress();
YAML::Node root;
if (auto yaml = ctx.supplied_yaml()) {
// Inline mode: YAML supplied via -d flag or JSONRPC.
// Not persisted to disk — runtime only.
root = yaml;
} else {
// File mode: read from the registered config file on disk.
root = YAML::LoadFile(config_filename);
}
// ... parse and apply config ...
ctx.complete("Loaded successfully");
}
```
For composite configs (e.g., `SSLClientCoordinator`), handlers create child
contexts to track
each sub-config independently. From `SSLClientCoordinator::reconfigure()`:
```c++
SSLConfig::reconfigure(reconf_ctx.add_dependent_ctx("SSLConfig"));
SNIConfig::reconfigure(reconf_ctx.add_dependent_ctx("SNIConfig"));
SSLCertificateConfig::reconfigure(reconf_ctx.add_dependent_ctx("SSLCertificateConfig"));
reconf_ctx.complete("SSL configs reloaded");
```
The parent task automatically aggregates status from its children. In
`traffic_ctl config status`,
this renders as a tree:
```
✔ ssl_client_coordinator ················· 35ms
├─ ✔ SSLConfig ·························· 10ms
├─ ✔ SNIConfig ·························· 12ms
└─ ✔ SSLCertificateConfig ·············· 13ms
```
### Design Challenges
#### 1. Handlers must reach a terminal state — or the task hangs
The entire tracing model relies on handlers calling `ctx.complete()` or
`ctx.fail()` before
returning. If a handler returns without reaching a terminal state, the task
stays `IN_PROGRESS`
indefinitely until the timeout checker marks it as `TIMEOUT`.
After `execute_reload()` calls the handler, it checks `ctx.is_terminal()`
and emits a warning
if the handler left the task in a non-terminal state:
```c++
entry_copy.handler(ctx);
if (!ctx.is_terminal()) {
  Warning("Config '%s' handler returned without reaching a terminal state. "
          "If the handler deferred work to another thread, ensure ctx.complete() or ctx.fail() "
          "is called when processing finishes; otherwise the task will remain in progress "
          "until the timeout checker marks it as TIMEOUT.",
          entry_copy.key.c_str());
}
```
**The safety net:** `ConfigReloadProgress` runs periodically on `ET_TASK`
and marks stuck tasks as
`TIMEOUT` after the configured duration
(`proxy.config.admin.reload.timeout`, default: `1h`).
#### 2. Parent status aggregation from sub-tasks
Parent tasks do **not** track their own status directly — they derive it
from their children.
When a child calls `complete()` or `fail()`, it notifies its parent, which
re-evaluates:
- **Any child failed or timed out** → parent is `FAIL`
- **Any child still in progress** → parent stays `IN_PROGRESS`
- **All children succeeded** → parent is `SUCCESS`
This aggregation is recursive: a sub-task can have its own children (e.g.,
`ssl_client_coordinator` → `sni` + `ssl_multicert`), and status bubbles up
through the tree.
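A minimal sketch of that bubble-up rule is shown below, with hypothetical types rather than the actual `ConfigReloadTask` implementation; the precedence (failure first, then pending, then success) follows the list above.
```c++
#include <memory>
#include <vector>

enum class Status { CREATED, IN_PROGRESS, SUCCESS, FAIL, TIMEOUT };

struct Task {
  Status own_status{Status::CREATED};
  std::vector<std::shared_ptr<Task>> children;

  // Leaf tasks report their own status; parents derive theirs from children.
  Status status() const {
    if (children.empty()) {
      return own_status;
    }
    bool any_fail = false, any_pending = false;
    for (const auto &child : children) {
      Status s = child->status(); // recursive: grandchildren bubble up too
      if (s == Status::FAIL || s == Status::TIMEOUT) any_fail = true;
      if (s == Status::CREATED || s == Status::IN_PROGRESS) any_pending = true;
    }
    if (any_fail) return Status::FAIL;           // any failed/timed-out child fails the parent
    if (any_pending) return Status::IN_PROGRESS; // still waiting on at least one child
    return Status::SUCCESS;                      // every child succeeded
  }
};
```
A forgotten child (never `complete()`d or `fail()`ed) stays `CREATED`, so the parent can never reach `SUCCESS`, which is exactly the subtle issue described next.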
One subtle issue: if a handler creates child contexts but **forgets to call
`complete()` or
`fail()`** on one of them, that child stays `CREATED` and the parent never
reaches `SUCCESS`.
It is the handler developer's responsibility to ensure every `ConfigContext`
(and its children)
reaches a terminal state (`complete()` or `fail()`). The timeout checker is
the ultimate safety
net for cases where this is not properly handled.
#### 3. Startup vs. reload — same handler, different context
Handlers are called both at startup (initial config load) and during runtime
reloads. At startup,
there is no active `ReloadCoordinator` task, so all `ConfigContext`
operations (`in_progress()`,
`complete()`, `fail()`, `log()`) are **safe no-ops** — they check
`_task.lock()` and return
immediately if the weak pointer is expired or empty.
This avoids having two separate code paths for startup vs. reload. The
handler logic is identical
in both cases:
```c++
void IpAllow::reconfigure(ConfigContext ctx) {
  ctx.in_progress(); // no-op at startup, tracks progress during reload
  // ... load config ...
  ctx.complete();    // no-op at startup, marks task as SUCCESS during reload
}
```
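A simplified illustration of how that guard can work is sketched below; the member names are hypothetical and the real `ConfigContext` carries more state, but the `weak_ptr` check is the essential idea.
```c++
#include <memory>
#include <string>

class ConfigReloadTask; // owns per-handler status, timings, and logs

class ConfigContextSketch {
public:
  // At startup no reload task exists, so _task is empty and every call below
  // is a cheap no-op; during a reload the coordinator hands out a live task.
  void in_progress(const std::string &msg = "") {
    if (auto task = _task.lock()) {
      (void)task; // real code: forward msg and mark the task IN_PROGRESS
    }
    // expired/empty task (startup): silently do nothing
  }

  void complete(const std::string &msg = "") {
    if (auto task = _task.lock()) {
      (void)task; // real code: mark the task SUCCESS and notify the parent
    }
  }

private:
  std::weak_ptr<ConfigReloadTask> _task; // expired or empty at startup
};
```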
#### 4. Known issue: `ssl_client_coordinator` may appear twice in reload status
`ssl_client_coordinator` registers multiple trigger records and file
dependencies, each wiring an
independent `on_record_change` callback with no deduplication. When several
of these fire during
the same reload (e.g., both `sni.yaml` and `ssl_multicert.config` changed),
the handler executes
more than once, producing duplicate entries in the reload status output.
This is a pre-existing issue present on `master` — see
[#11724](https://github.com/apache/trafficserver/issues/11724).
#### 5. Plugin support
Plugins are **not** supported by `ConfigRegistry` in this PR. The legacy
reload notification
mechanism (`TSMgmtUpdateRegister`) still works — plugins registered through
it will continue
to be invoked via `FileManager::invokeConfigPluginCallbacks()` during every
reload cycle.
A dedicated plugin API to let plugins register their own config handlers and
participate in
the reload framework will be addressed in a separate PR.
---
### Configs Migrated to `ConfigRegistry`
| Config | Key | File |
|---|---|---|
| IP Allow | `ip_allow` | `ip_allow.yaml` |
| IP Categories | (dependency of `ip_allow`) | `ip_categories.yaml` |
| Cache Control | `cache_control` | `cache.config` |
| Cache Hosting | `cache_hosting` | `hosting.config` |
| Parent Selection | `parent_proxy` | `parent.config` |
| Split DNS | `split_dns` | `splitdns.config` |
| Remap | `remap` | `remap.config` ¹ |
| Logging | `logging` | `logging.yaml` |
| SSL/TLS (coordinator) | `ssl_client_coordinator` | — |
| SNI | (dependency of `ssl_client_coordinator`) | `sni.yaml` |
| SSL Multicert | (dependency of `ssl_client_coordinator`) | `ssl_multicert.config` |
| SSL Ticket Key | `ssl_ticket_key` | (record-only, no file) |
¹ Remap migration will be refactored after
[#12813](https://github.com/apache/trafficserver/pull/12813) (remap.yaml)
and [#12669](https://github.com/apache/trafficserver/pull/12669) (virtual
hosts) land.
### New Configuration Records
```yaml
records:
  admin:
    reload:
      # Maximum time a reload task can run before being marked as TIMEOUT.
      # Supports duration strings: "30s", "5min", "1h". Set to "0" to disable.
      # Default: 1h. Updateable at runtime (RECU_DYNAMIC).
      timeout: 1h
      # How often the progress checker polls for stuck tasks (minimum: 1s).
      # Supports duration strings: "1s", "5s", "30s".
      # Default: 2s. Updateable at runtime (RECU_DYNAMIC).
      check_interval: 2s
```
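Because both records are marked `RECU_DYNAMIC`, they should be adjustable at runtime with the usual record commands; the record names below simply follow the `records.yaml` nesting above.
```bash
# Tighten the reload timeout and polling interval without a restart
$ traffic_ctl config set proxy.config.admin.reload.timeout 5min
$ traffic_ctl config set proxy.config.admin.reload.check_interval 5s
```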
---
### TODO
- [ ] **Clean up** — Do a general cleanup pass on this code.
- [ ] **Documentation** — Add user-facing docs for the new `traffic_ctl
config reload` / `config status` commands and the JSONRPC APIs.
- [ ] **Improve error reporting** — All config loaders are migrated to
`ConfigRegistry`. Remaining work: fully log detailed errors via `ctx.log()`,
`ctx.fail()`, etc.
- [ ] **Enable inline YAML for more handlers** — Currently file-based
handlers use `ConfigSource::FileOnly` and record-only handlers use
`ConfigSource::RecordOnly`. Migrate file-based handlers to `FileAndRpc` so they
can read YAML directly from the RPC (via `ctx.supplied_yaml()`).
- [ ] **Remove legacy reload infrastructure** — Most config loaders are
migrated. Remove `ConfigUpdateHandler`/`ConfigUpdateContinuation` and the
remaining `registerFile()` calls in `AddConfigFilesHere.cc`.
- [ ] **Consolidate `AddConfigFilesHere.cc` into `ConfigRegistry`** —
Remaining static files (`storage.config`, `volume.config`, `plugin.config`,
etc.) can be registered in `ConfigRegistry` as inventory-only entries (no
handler, no reload) to fully retire `AddConfigFilesHere.cc`.
- [ ] **Autest reload extension** — Implement an autest extension that
checks reload success/failure via the JSONRPC status API (`traffic_ctl config
status -t <token>`) instead of grepping log files.
- [ ] **Trace record-triggered reloads** — Record-based reloads (via
`trigger_records` / `RecRegisterConfigUpdateCb`) are not currently tracked.
Create a main task with a synthetic token so they appear in `traffic_ctl config
status`.
- [ ] **Expose ConfigRegistry to plugins** — Add a plugin API so plugins can
register their own config handlers and participate in the reload framework.
This will be in a separate PR.
- [ ] **Additional tests** — Expand autest coverage.
---
### Dependencies and Related Issues
Fixes [#12324 — Improving `traffic_ctl config reload`](https://github.com/apache/trafficserver/issues/12324).
This PR will likely land after:
- [#12813](https://github.com/apache/trafficserver/pull/12813)
- [#12669](https://github.com/apache/trafficserver/pull/12669)
There should be no major conflicts with those PRs, but some conversation and coordination will be needed before merging.
### AI
Claude helped with the majority of the tests, the output formatting, and some refactoring. Most of this PR description was also written by Claude.