Copilot commented on code in PR #12961:
URL: https://github.com/apache/trafficserver/pull/12961#discussion_r2932611655
##########
src/iocore/net/ProxyProtocol.cc:
##########
@@ -237,7 +238,7 @@ proxy_protocol_v2_parse(ProxyProtocol *pp_info, const swoc::TextView &msg)
uint16_t tlv_len = 0;
if (msg.size() < total_len) {
- Dbg(dbg_ctl_proxyprotocol_v2, "The amount of available data is smaller than the expected size");
+ Error("The size of PP header received (%zu) is smaller than the expected size (%zu)", msg.size(), total_len);
Review Comment:
`proxy_protocol_v2_parse()` returns 0 when the buffer doesn't yet contain
the full header (`msg.size() < total_len`). Logging this as `Error()` will spam
`error.log` during normal partial-read situations (especially on new
connections) and can be triggered by malformed/fragmented input. This should
remain debug-level (e.g., `Dbg(dbg_ctl_proxyprotocol_v2, ...)`) or be logged
only once when you’ve decided to abort the connection.
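The suggested split can be sketched as follows. This is a standalone illustration rather than ATS code: `ParseStatus` and `parse_v2_header` are hypothetical names, and the debug-level call is shown only as a comment.

```cpp
#include <cstddef>

// Hypothetical result type: a short read is not an error, it just means the
// caller should wait for more bytes before retrying the parse.
enum class ParseStatus { Complete, NeedMoreData, Malformed };

ParseStatus
parse_v2_header(const char *buf, size_t avail, size_t total_len)
{
  if (avail < total_len) {
    // Normal during partial reads on new connections: debug-level only, e.g.
    // Dbg(dbg_ctl_proxyprotocol_v2, "have %zu of %zu bytes", avail, total_len);
    return ParseStatus::NeedMoreData;
  }
  // ... actual header/TLV parsing would go here ...
  return ParseStatus::Complete;
}
```

An `Error()` (if wanted at all) would then be emitted once, by the caller, at the point where it decides to abort the connection rather than wait for more data.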
##########
src/iocore/net/NetVConnection.cc:
##########
@@ -53,9 +54,17 @@ DbgCtl dbg_ctl_ssl{"ssl"};
bool
NetVConnection::has_proxy_protocol(IOBufferReader *reader)
{
- char buf[PPv1_CONNECTION_HEADER_LEN_MAX + 1];
swoc::TextView tv;
- tv.assign(buf, reader->memcpy(buf, sizeof(buf), 0));
+
+ char preface[PPv2_CONNECTION_HEADER_LEN];
+ tv.assign(preface, reader->memcpy(preface, sizeof(preface), 0));
+ if (!proxy_protocol_detect(tv)) {
+ return false;
+ }
+
+ int bufsize = RecGetRecordInt("proxy.config.proxy_protocol.max_header_size").value_or(PPv1_CONNECTION_HEADER_LEN_MAX + 1);
+ char buf[bufsize];
+ tv.assign(buf, reader->memcpy(buf, bufsize, 0));
Review Comment:
`bufsize` is read from a config record and then used for `char buf[bufsize];`. Variable-length arrays are not valid C++ (this relies on a compiler extension), and this pattern also risks large or negative stack allocations if the record is misconfigured. Prefer a heap-backed buffer (e.g., `std::vector<char>`/`std::unique_ptr<char[]>`) and clamp/cast the record value to a safe `size_t` range before allocating/copying.
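A minimal sketch of the suggested fix, using the record's documented `[109-65535]` range as the clamp bounds; `make_header_buffer` is a hypothetical helper, not an ATS API:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Clamp an untrusted record value into the record's documented
// "[109-65535]" range, then allocate on the heap. This avoids the VLA
// (`char buf[bufsize]`), which is a compiler extension in C++ and unsafe
// for large or negative values.
std::vector<char>
make_header_buffer(int64_t configured)
{
  auto n = static_cast<size_t>(std::clamp<int64_t>(configured, 109, 65535));
  return std::vector<char>(n); // heap-backed, zero-initialized
}
```

The clamp means a misconfigured or corrupted record value degrades to the nearest safe size instead of overflowing the stack.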
##########
src/iocore/net/NetVConnection.cc:
##########
@@ -53,9 +54,17 @@ DbgCtl dbg_ctl_ssl{"ssl"};
bool
NetVConnection::has_proxy_protocol(IOBufferReader *reader)
{
- char buf[PPv1_CONNECTION_HEADER_LEN_MAX + 1];
swoc::TextView tv;
- tv.assign(buf, reader->memcpy(buf, sizeof(buf), 0));
+
+ char preface[PPv2_CONNECTION_HEADER_LEN];
+ tv.assign(preface, reader->memcpy(preface, sizeof(preface), 0));
+ if (!proxy_protocol_detect(tv)) {
+ return false;
+ }
+
+ int bufsize = RecGetRecordInt("proxy.config.proxy_protocol.max_header_size").value_or(PPv1_CONNECTION_HEADER_LEN_MAX + 1);
Review Comment:
Calling `RecGetRecordInt(... )` from `has_proxy_protocol()` does a locked
records lookup on every new connection. Since this runs on the accept / probing
path, it would be better to establish/link this config once (e.g.,
`RecEstablishStaticConfigInt32`) and read the cached value here.
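The read-once pattern can be illustrated standalone as below; `lookup_record_locked` and `g_lookup_calls` are stubs invented for the sketch (the counter just makes the caching observable), not ATS's actual mechanism:

```cpp
#include <cstddef>
#include <cstdint>

// Stub standing in for the locked records lookup; counts calls so the
// once-only behavior can be verified.
static int g_lookup_calls = 0;

static int64_t
lookup_record_locked(const char * /*name*/)
{
  ++g_lookup_calls;
  return 109;
}

// A function-local static initializes exactly once (thread-safe since
// C++11), so the locked lookup runs on the first call only. Since the
// record is RECU_RESTART_TS, the value cannot change at runtime and the
// cached copy stays valid for the process lifetime.
size_t
max_pp_header_size()
{
  static const int64_t cached = lookup_record_locked("proxy.config.proxy_protocol.max_header_size");
  return static_cast<size_t>(cached);
}
```

In ATS itself the equivalent would be linking the record once at startup, as the comment suggests, rather than a function-local static.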
##########
src/records/RecordsConfig.cc:
##########
@@ -1550,7 +1550,14 @@ static constexpr RecordElement RecordsConfig[] =
//# Thread watchdog
//#
//###########
- {RECT_CONFIG, "proxy.config.exec_thread.watchdog.timeout_ms", RECD_INT, "0", RECU_RESTART_TS, RR_NULL, RECC_INT, "[0-10000]", RECA_NULL}
+ {RECT_CONFIG, "proxy.config.exec_thread.watchdog.timeout_ms", RECD_INT, "0", RECU_RESTART_TS, RR_NULL, RECC_INT, "[0-10000]", RECA_NULL},
+
+ //###########
+ //#
+ //# PROXY protocol
+ //#
+ //###########
+ {RECT_CONFIG, "proxy.config.proxy_protocol.max_header_size", RECD_INT, "109", RECU_RESTART_TS, RR_NULL, RECC_INT, "[109-65535]", RECA_NULL},
Review Comment:
This introduces a new PROXY protocol record under
`proxy.config.proxy_protocol.*`, but existing PROXY protocol knobs are under
`proxy.config.http.proxy_protocol_*` (see the nearby
`proxy.config.http.proxy_protocol_allowlist` /
`proxy.config.http.proxy_protocol_out`). To keep the configuration namespace
consistent and easier to discover, consider placing this new record under the
same `proxy.config.http.proxy_protocol.*` prefix (and update the lookup + docs
accordingly) before this ships.
##########
src/iocore/net/NetVConnection.cc:
##########
@@ -53,9 +54,17 @@ DbgCtl dbg_ctl_ssl{"ssl"};
bool
NetVConnection::has_proxy_protocol(IOBufferReader *reader)
{
- char buf[PPv1_CONNECTION_HEADER_LEN_MAX + 1];
swoc::TextView tv;
- tv.assign(buf, reader->memcpy(buf, sizeof(buf), 0));
+
+ char preface[PPv2_CONNECTION_HEADER_LEN];
+ tv.assign(preface, reader->memcpy(preface, sizeof(preface), 0));
+ if (!proxy_protocol_detect(tv)) {
+ return false;
+ }
+
+ int bufsize = RecGetRecordInt("proxy.config.proxy_protocol.max_header_size").value_or(PPv1_CONNECTION_HEADER_LEN_MAX + 1);
+ char buf[bufsize];
+ tv.assign(buf, reader->memcpy(buf, bufsize, 0));
Review Comment:
New behavior is gated by `proxy.config.proxy_protocol.max_header_size`, but
there doesn’t appear to be any existing test coverage that exercises this knob
(no tests reference the record) or validates parsing of a PPv2 header larger
than the v1-sized default. Consider extending an existing PROXY protocol gold
test to send a PPv2 header with TLVs that exceeds 109 bytes and verifies it is
accepted when the record is increased (and rejected/closed when it isn’t).
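Such a test needs a PPv2 header longer than 109 bytes; a hypothetical helper could pad one with a `PP2_TYPE_NOOP` TLV, following the field layout in the PROXY protocol v2 spec (the payload size below is arbitrary for the test):

```cpp
#include <cstdint>
#include <string>

// Builds a PROXY protocol v2 header (PROXY command, TCP over IPv4) padded
// with a single PP2_TYPE_NOOP (0x04) TLV so the total can exceed the
// 109-byte default.
std::string
build_ppv2_header(uint16_t noop_payload_len)
{
  static const char sig[12] = {'\r', '\n', '\r', '\n', '\0', '\r', '\n', 'Q', 'U', 'I', 'T', '\n'};
  std::string h(sig, sizeof(sig));
  h.push_back('\x21'); // version 2, command PROXY
  h.push_back('\x11'); // AF_INET, SOCK_STREAM
  uint16_t len = 12 /* IPv4 addrs + ports */ + 3 /* TLV type + length */ + noop_payload_len;
  h.push_back(static_cast<char>(len >> 8)); // length, network byte order
  h.push_back(static_cast<char>(len & 0xff));
  h.append(12, '\0');  // src/dst addresses and ports, zeroed for brevity
  h.push_back('\x04'); // PP2_TYPE_NOOP
  h.push_back(static_cast<char>(noop_payload_len >> 8));
  h.push_back(static_cast<char>(noop_payload_len & 0xff));
  h.append(noop_payload_len, 'x');
  return h;
}
```

With a 100-byte NOOP payload the full header is 131 bytes (16-byte fixed part + 12 + 3 + 100), comfortably over the 109-byte default, so the gold test could assert acceptance with the record raised and rejection without it.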
##########
doc/admin-guide/files/records.yaml.en.rst:
##########
@@ -5234,6 +5234,18 @@ UDP Configuration
Enables (``1``) or disables (``0``) UDP GRO. When enabled, |TS| will try to use it
when reading the UDP socket.
+
+PROXY protocol Configuration
+=============================
+
+.. ts:cv:: CONFIG proxy.config.proxy_protocol.max_header_size INT 109
+
+   Sets the maximum size of PROXY protocol header to receive.
+   The default size is enough for PROXY protocol version 1. The size needs to be increased
+   if the version 2 is used with many TLV fields. Although you can set a numbere up to 65535,
Review Comment:
Typo in documentation: "numbere" should be "number".
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]