nickva commented on code in PR #5602:
URL: https://github.com/apache/couchdb/pull/5602#discussion_r2316517635
##########
src/couch_srt/README.md:
##########
@@ -0,0 +1,1025 @@

# couch_srt: Couch Stats Resource Tracker aka CSRT

The `couch_srt` app introduces the Couch Stats Resource Tracker, aka CSRT for
short. CSRT is a real time stats tracking system that tracks the quantity of
resources induced at the process level in a live queryable manner, while also
generating process lifetime reports containing statistics on the total resource
load of a request, as a function of CouchDB operations like dbs/docs opened,
view and changes rows read, changes returned vs processed, Javascript filter
usage, request duration, and more. This system is a paradigm shift in CouchDB
visibility and introspection, allowing for expressive real time querying
capabilities to introspect, understand, and aggregate CouchDB internal resource
usage, as well as powerful filtering facilities for conditionally generating
reports on "heavy usage" requests or "long/slow" requests. CSRT also extends
`recon:proc_window` with `couch_srt:proc_window`, allowing for the same style of
battle hardened introspection as Recon's excellent `proc_window`, but with the
sample window over any of the CSRT tracked CouchDB stats!

CSRT does this by piggy-backing off of the existing metrics tracked by way of
`couch_stats:increment_counter` at the time the local process induces those
metrics inc calls: CSRT then updates an ets entry containing the context
information for the local process, such that global aggregate queries can be
performed against the ets table, and process resource usage reports can be
generated at the conclusion of the process's lifecycle. The ability to do
aggregate querying in real time, in addition to the process lifecycle reports
for post facto analysis over time, is a cornerstone of CSRT and the result of a
series of iterations until a robust and scalable approach was built.

The real time querying is achieved by way of a global ets table with
`read_concurrency`, `write_concurrency`, and `decentralized_counters` enabled.
Great care was taken to ensure that _zero_ concurrent writes to the same key
occur in this model, and this entire system is predicated on the fact that
incremental updates by way of `ets:update_counter` are *really* fast and
efficient, in an atomic and isolated fashion, when coupled with decentralized
counters and write concurrency. Each process that calls
`couch_stats:increment_counter` tracks its local context in CSRT as well, with
zero concurrent writes from any other processes. Outside of the context setup
and teardown logic, _only_ operations to `ets:update_counter` are performed: one
per process invocation of `couch_stats:increment_counter`, and one for
coordinators to update worker deltas in a single batch, resulting in a 1:1 ratio
of ets calls to real time stats updates for the primary workloads.

The primary achievement of CSRT is the core framework itself for concurrent
process local stats tracking and real time RPC delta accumulation in a scalable
manner that allows for real time aggregate querying and process lifecycle
reports. It took several versions to find a robust and scalable approach that
induced minimal impact on maximum system throughput. Now that the framework is
in place, it can be extended to track any further desired process local uses of
`couch_stats:increment_counter`.
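To make that write path concrete, here is a minimal, self-contained sketch of
the single-writer counter pattern described above. The module name and the flat
`{PidRef, IoqCalls, DocsRead}` row layout are invented for illustration; the
real implementation keeps its context in an `#rctx{}` record inside `couch_srt`:

```
%% Illustrative sketch only, not the actual couch_srt implementation.
-module(csrt_counter_sketch).
-export([demo/0]).

demo() ->
    %% Same table flags the README describes for the real CSRT table
    %% (decentralized_counters requires a recent OTP release).
    Tab = ets:new(csrt_sketch, [set, public,
                                {read_concurrency, true},
                                {write_concurrency, true},
                                {decentralized_counters, true}]),
    %% Each context is keyed by a {Pid, Ref} pair, so the owning
    %% process is the only writer for its own row.
    PidRef = {self(), make_ref()},
    %% Hypothetical row layout: {PidRef, IoqCalls, DocsRead}.
    true = ets:insert(Tab, {PidRef, 0, 0}),
    %% A stat bump is a single atomic batch: +2 ioq_calls (position 2)
    %% and +1 docs_read (position 3), with no read-modify-write cycle.
    ets:update_counter(Tab, PidRef, [{2, 2}, {3, 1}]),
    ets:lookup(Tab, PidRef).
```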
That said, the currently selected set of stats to track was heavily influenced
by the challenges in retroactively understanding the quantity of resources
induced by a query like `/db/_changes?since=$SEQ`, or similarly, `/db/_find`.

CSRT started as an extension of the Mango execution stats logic to `_changes`
feeds to get proper visibility into the quantity of docs read and filtered per
changes request, but then the focus inverted with the realization that we should
instead use the existing stats tracking mechanisms that have already been deemed
critical information to track, which then also allows for the real time tracking
and aggregate query capabilities. The Mango execution stats can be ported into
CSRT itself and just become one subset of the stats tracked as a whole, and
similarly, any additional desired stats tracking can be easily added and will be
picked up in the RPC deltas and process lifetime reports.

## A Simple Example

Take a database `foo` with 11k documents, each containing an integer `doc.value`
field that can be filtered in a design doc by way of even and odd filters. If we
instantiate a series of while loops in parallel making requests of the form:

> GET /foo/_changes?filter=bar/even&include_docs=true

we can generate a good chunk of load on a local laptop dev setup, resulting in
requests that take a few seconds to load through the changes feed, fetch all 11k
docs, and then funnel them through the Javascript engine to filter for even
valued docs; this allows us time to query these heavier requests live and see
them in progress with the real time stats tracking and querying capabilities of
CSRT.

For example, let's use `couch_srt:proc_window/3` as one would use
`recon:proc_window/3` to get an idea of the heavy active processes on the
system:

```
(node1@127.0.0.1)2> rp([{PR, couch_srt:to_json(couch_srt:get_resource(PR))} || {PR, _, _} <- couch_srt:proc_window(ioq_calls, 3, 1000)]).
[{{<0.5090.0>,#Ref<0.2277656623.605290499.37969>},
  #{changes_returned => 3962,db_open => 10,dbname => <<"foo">>,
    docs_read => 7917,docs_written => 0,get_kp_node => 54,
    get_kv_node => 1241,ioq_calls => 15834,js_filter => 7917,
    js_filtered_docs => 7917,nonce => <<"cc5a814ceb">>,
    pid_ref =>
        <<"<0.5090.0>:#Ref<0.2277656623.605290499.37969>">>,
    rows_read => 7917,
    started_at => <<"2025-07-21T17:25:08.784z">>,
    type =>
        <<"coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes">>,
    updated_at => <<"2025-07-21T17:25:13.051z">>,
    username => <<"adm">>}},
 {{<0.5087.0>,#Ref<0.2277656623.606601217.92191>},
  #{changes_returned => 4310,db_open => 10,dbname => <<"foo">>,
    docs_read => 8624,docs_written => 0,get_kp_node => 58,
    get_kv_node => 1358,ioq_calls => 17248,js_filter => 8624,
    js_filtered_docs => 8624,nonce => <<"0e625c723a">>,
    pid_ref =>
        <<"<0.5087.0>:#Ref<0.2277656623.606601217.92191>">>,
    rows_read => 8624,
    started_at => <<"2025-07-21T17:25:08.424z">>,
    type =>
        <<"coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes">>,
    updated_at => <<"2025-07-21T17:25:13.051z">>,
    username => <<"adm">>}},
 {{<0.5086.0>,#Ref<0.2277656623.605290499.27728>},
  #{changes_returned => 4285,db_open => 10,dbname => <<"foo">>,
    docs_read => 8569,docs_written => 0,get_kp_node => 57,
    get_kv_node => 1349,ioq_calls => 17138,js_filter => 8569,
    js_filtered_docs => 8569,nonce => <<"962cda1645">>,
    pid_ref =>
        <<"<0.5086.0>:#Ref<0.2277656623.605290499.27728>">>,
    rows_read => 8569,
    started_at => <<"2025-07-21T17:25:08.406z">>,
    type =>
        <<"coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes">>,
    updated_at => <<"2025-07-21T17:25:13.051z">>,
    username => <<"adm">>}}]
ok
```

This shows us the top 3 most active processes (being tracked in CSRT) over the
next 1000 milliseconds, sorted by the number of `ioq_calls` induced! All three
of these processes are incurring heavy usage, reading many thousands of docs
with 15k+ IOQ calls and heavy JS filter usage: exactly the types of requests you
want to be alerted to. CSRT's proc window logic is built on top of Recon's,
which doesn't return the process info itself, so you'll need to fetch the
process status with `couch_srt:get_resource/1` and then pretty print it with
`couch_srt:to_json/1`.

The output above is a real time snapshot of the live running system and shows
processes actively inducing additional resource usage, so these CSRT context
values are just a point-in-time snapshot of where each process was, as of the
`updated_at` timestamp. We can reference the nonce value to search through the
report logs for a final report, assuming the given context ended up using
sufficient resources to trigger a logger matcher lifetime report. The above
changes requests were crafted specifically to trigger reports, so unsurprisingly
we have reports for all three.

However, I first want to show the existing visibility into these changes
requests as exposed by the raw HTTP logs, to highlight the impact of the CSRT
reports and the new visibility into request workloads they provide.

First, let's look at the existing HTTP logs for those 3 requests:

```
(chewbranca)-(jobs:1)-(~/src/couchdb_csrt_v3)
(! 9872)-> grep 'cc5a814ceb\|0e625c723a\|962cda1645' ./dev/logs/node1.log | grep -v '^\[report]'
[notice] 2025-07-21T17:25:14.520641Z node1@127.0.0.1 <0.5087.0> 0e625c723a localhost:15984 127.0.0.1 adm GET /foo/_changes?filter=bar/even&asdf=fdsa&include_docs=true 200 ok 6096
[notice] 2025-07-21T17:25:14.521417Z node1@127.0.0.1 <0.5086.0> 962cda1645 localhost:15984 127.0.0.1 adm GET /foo/_changes?filter=bar/even&asdf=fdsa&include_docs=true 200 ok 6115
[notice] 2025-07-21T17:25:14.844317Z node1@127.0.0.1 <0.5090.0> cc5a814ceb localhost:15984 127.0.0.1 adm GET /foo/_changes?filter=bar/even&asdf=fdsa&include_docs=true 200 ok 6059
```

So we see the requests were made, and we can see each is using
`include_docs=true` as well as a custom filter, both obvious indications that
this is a potentially heavier request. However, we don't know whether database
`foo` had a thousand docs or a billion docs, whether those docs were small or
large, nor do we get any indication of the computational complexity of the
referenced filter function. This makes it challenging to retroactively correlate
heavy resource usage at a hardware level with the underlying requests that
induced those workloads, especially if the heavy requests are an inconspicuous
subset of the full database workload.

CSRT resolves this by providing a real time querying system to find the active
heavy processes, live, as well as a process lifecycle reporting engine providing
detailed analysis of the workloads induced by the request.

Let's assume we had the default IOQ logger matcher enabled, with the default
configuration of logging any requests inducing more than 10k IOQ calls, which
would catch all three of our requests above, even though they're all still
going. As a result, we generate process lifecycle reports for all three of those
requests, as we can see:

```
(chewbranca)-(jobs:1)-(~/src/couchdb_csrt_v3)
(! 9873)-> grep 'cc5a814ceb\|0e625c723a\|962cda1645' ./dev/logs/node1.log | grep '^\[report]'
[report] 2025-07-21T17:25:14.520787Z node1@127.0.0.1 <0.5174.0> -------- [csrt-pid-usage-lifetime changes_returned=5500 db_open=10 dbname="foo" docs_read=11001 get_kp_node=72 get_kv_node=1754 ioq_calls=22002 js_filter=11001 js_filtered_docs=11001 nonce="0e625c723a" pid_ref="<0.5087.0>:#Ref<0.2277656623.606601217.92191>" rows_read=11001 started_at="2025-07-21T17:25:08.424z" type="coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes" updated_at="2025-07-21T17:25:14.520z" username="adm"]
[report] 2025-07-21T17:25:14.521578Z node1@127.0.0.1 <0.5155.0> -------- [csrt-pid-usage-lifetime changes_returned=5500 db_open=10 dbname="foo" docs_read=11001 get_kp_node=72 get_kv_node=1754 ioq_calls=22002 js_filter=11001 js_filtered_docs=11001 nonce="962cda1645" pid_ref="<0.5086.0>:#Ref<0.2277656623.605290499.27728>" rows_read=11001 started_at="2025-07-21T17:25:08.406z" type="coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes" updated_at="2025-07-21T17:25:14.521z" username="adm"]
[report] 2025-07-21T17:25:14.844436Z node1@127.0.0.1 <0.5213.0> -------- [csrt-pid-usage-lifetime changes_returned=5500 db_open=10 dbname="foo" docs_read=11001 get_kp_node=72 get_kv_node=1754 ioq_calls=22002 js_filter=11001 js_filtered_docs=11001 nonce="cc5a814ceb" pid_ref="<0.5090.0>:#Ref<0.2277656623.605290499.37969>" rows_read=11001 started_at="2025-07-21T17:25:08.784z" type="coordinator-{chttpd_db:handle_changes_req}:GET:/foo/_changes" updated_at="2025-07-21T17:25:14.844z" username="adm"]
```

We find the process lifecycle reports for the requests with the three nonces we
grep'ed for, and we can see they all read the 11k core documents, plus the one
design document, JS filtered all 11,001 docs, and then only returned the 5500
docs containing an even `doc.value` field.

This also shows the discrepancy between the quantity of resource usage induced
to actually service a request, relative to the magnitude of the data returned.
All of our `doc.value` fields were positive integers; if we had a filter
function searching for negative `doc.value` results, we would have found none,
resulting in `changes_returned=0`, but we would have still induced the 11,001
doc loads and Javascript filter calls.

CSRT is specifically built to automatically find and report these types of
workload discrepancies and, in general, to help highlight where individual HTTP
requests use drastically more resources than the median workloads.

See the dedicated proc window documentation section further down for more info.

## Additional Overview and Examples

The query and HTTP APIs are well documented and tested (h/t @iilyak) and
provide an excellent overview of the interaction patterns and query capabilities
of CSRT.
Those can be found at:

* `couch_srt_query.erl` "Query API functions"
  - https://github.com/apache/couchdb/blob/93bc894380056ccca1f77415454e991c4d914249/src/couch_stats/src/couch_srt_query.erl#L319-L674
  - the above highlighted functions are well tested, typespec'ed, and have
    auxiliary documentation and examples, an excellent resource
* the `couch_srt_query_tests.erl` Eunit tests are an excellent overview of utilizing
  the `couch_srt_query:` API from Erlang to find, filter, and aggregate CSRT real
  time contexts
  - https://github.com/apache/couchdb/blob/93bc894380056ccca1f77415454e991c4d914249/src/couch_stats/test/eunit/couch_srt_query_tests.erl
* similarly, the `couch_srt_httpd_tests.erl` Eunit tests are an excellent overview of
  performing the same style of `couch_srt_query:` queries, but through the HTTP API
  - https://github.com/apache/couchdb/blob/93bc894380056ccca1f77415454e991c4d914249/src/couch_stats/test/eunit/couch_srt_httpd_tests.erl
* additionally, the `couch_srt_logger_tests.erl` Eunit tests demonstrate the
  different default logger matchers in action
  - https://github.com/apache/couchdb/blob/93bc894380056ccca1f77415454e991c4d914249/src/couch_stats/test/eunit/couch_srt_logger_tests.erl

# CSRT Config Keys

## -define(CSRT, "csrt").

> config:get("csrt").

Primary CSRT config namespace: contains core settings for enabling different
layers of functionality in CSRT, along with global config settings for limiting
data volume generation.

## -define(CSRT_MATCHERS_ENABLED, "csrt_logger.matchers_enabled").

> config:get("csrt_logger.matchers_enabled").

Config toggles for enabling specific builtin logger matchers, see the dedicated
section below on `# CSRT Default Matchers`.

## -define(CSRT_MATCHERS_THRESHOLD, "csrt_logger.matchers_threshold").

> config:get("csrt_logger.matchers_threshold").

Config settings for defining the primary `Threshold` value of the builtin logger
matchers, see the dedicated section below on `# CSRT Default Matchers`.

## -define(CSRT_MATCHERS_DBNAMES, "csrt_logger.dbnames_io").

> config:get("csrt_logger.dbnames_io").

Config section for setting `$db_name = $threshold`, resulting in instantiating a
"dbnames_io" logger matcher for each `$db_name` that will generate a CSRT
lifecycle report for any context on database `$db_name` that induced more
operations than `$threshold` on _any_ one of the fields
`ioq_calls|get_kv_node|get_kp_node|docs_read|rows_read`.

This is basically a simple matcher for finding heavy IO requests on a particular
database, in a manner amenable to key/value pair specifications in this .ini
file until a more sophisticated declarative model exists. In particular, it's
not easy to sequentially generate matchspecs by way of `ets:fun2ms/1`, so an
alternative mechanism for either dynamically assembling an `#rctx{}` to match
against, or for generating the raw matchspecs themselves, is warranted.
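For example, a hypothetical remsh invocation (the `"foo"` and `"10000"` values
are made up, following the `$db_name = $threshold` convention above) to
instantiate a "dbnames_io" matcher for database `foo`:

```
%% Illustrative: report any context on db "foo" that induces more than
%% 10000 operations on any one of the heavy IO fields listed above.
config:set("csrt_logger.dbnames_io", "foo", "10000").
```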
# CSRT Code Markers

## -define(CSRT_ETS, csrt_server).

This is the reference to the CSRT ets table; it's managed by `csrt_server`,
which is where the name originates from.

## -define(MATCHERS_KEY, {csrt_logger, all_csrt_matchers}).

This marker is where the active matchers are written to in `persistent_term`,
for concurrent and parallel access to the logger matchers from the CSRT tracker
processes doing lifecycle reporting.

# CSRT Process Dictionary Markers

## -define(PID_REF, {csrt, pid_ref}).

This marker stores the core `PidRef` identifier. The key idea here is that a
context lifecycle is contained within the given `PidRef`, meaning that a `Pid`
can instantiate different CSRT lifecycles and pass those to different workers.

This is specifically necessary for long running processes that need to handle
many CSRT context lifecycles over the course of that individual process's own
lifecycle. In practice, this is immediately needed for the actual coordinator
lifecycle tracking, as `chttpd` uses a worker pool of http request handlers that
can be re-used, so we need a way to create a CSRT lifecycle corresponding to the
given request currently being serviced. This is also intended to be used in
other long running processes, like IOQ or `couch_js` pids, such that we can
track the specific context inducing the operations on the `couch_file` pid or
indexer or replicator or whatever.

Worker processes have a more clear cut lifecycle, but either style of process
can be exit'ed in a manner that skips the ability to do cleanup operations, so
additionally there's a dedicated tracker process spawned to monitor the process
that induced the CSRT context, such that we can do the dynamic logger matching
directly in these tracker processes and also properly clean up the ets entries
even if the Pid crashes.

## -define(TRACKER_PID, {csrt, tracker}).

A handle to the spawned tracker process that does cleanup and logger matching
reports at the end of the process lifecycle. We store a reference to the tracker
pid so that for explicit context destruction, like in `chttpd` workers after a
request has been serviced, we can stop the tracker and perform the expected
cleanup directly.

## -define(DELTA_TA, {csrt, delta_ta}).

This stores our last delta snapshot to track progress since the last incremental
streaming of stats back to the coordinator process. It is updated with the
latest value after each new delta is made. Eg this stores `T0` so we can do
`T1 = get_resource()` and `make_delta(T0, T1)`, and then we save `T1` as the new
`T0` for use in our next delta.

## -define(LAST_UPDATED, {csrt, last_updated}).

This stores the integer corresponding to the `erlang:monotonic_time()` value of
the most recent `updated_at` value. Basically this lets us utilize a pdict value
to turn `updated_at` tracking into an incremental operation that can be chained
onto the existing atomic `ets:update_counter` and `ets:update_element` calls.

The issue being that our updates are of the form `+2 to ioq_calls for $pid_ref`,
which ets does in a guaranteed `atomic` and `isolated` manner. The strict use of
these atomic operations for tracking counter values is why this system works
efficiently at scale. This means that we can increment counters on all of the
stats counter fields in a batch, very quickly, but for tracking `updated_at`
timestamps we'd need to either do an extra ets call to get the last `updated_at`
value, or do an extra ets call to `ets:update_element` to set the `updated_at`
value to `couch_srt_util:tnow()`. The core problem with this is that the batch
inc operation is essentially the only write operation performed after the
initial context setting of dbname/handler/etc; this means that we'd literally
double the number of ets calls induced to track CSRT updates, just for tracking
the `updated_at`. So instead, we rely on the fact that the local process
corresponding to `$pid_ref` is the _only_ process doing updates, so we know the
last `updated_at` value will be the last time this process updated the data. We
track that value in the pdict, take a delta between `tnow()` and the last
`updated_at`, and then `updated_at` becomes a value we can sneak into the other
integer counter updates we're already performing!
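A rough sketch of that trick, with invented tuple positions standing in for the
real `#rctx{}` field indexes (the module and function names are hypothetical):

```
-module(csrt_updated_at_sketch).
-export([bump/3]).

%% Hypothetical positions standing in for #rctx{} record fields.
-define(IOQ_CALLS_POS, 2).
-define(UPDATED_AT_POS, 3).

bump(Tab, PidRef, IoqIncr) ->
    Now = erlang:monotonic_time(),   %% analog of couch_srt_util:tnow()
    %% put/2 returns the previous pdict value, i.e. the last update time.
    Dt = case put({csrt, last_updated}, Now) of
             undefined -> 0;         %% first update for this context
             Last -> Now - Last
         end,
    %% One atomic ets call bumps the stat *and* advances updated_at by
    %% the elapsed delta, avoiding a second lookup or update_element.
    ets:update_counter(Tab, PidRef,
                       [{?IOQ_CALLS_POS, IoqIncr}, {?UPDATED_AT_POS, Dt}]).
```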
# Primary Config Toggles

# CSRT (?CSRT="csrt") Config Settings

## config:get(?CSRT, "enable", false).

Core enablement toggle for CSRT, defaults to false. Enabling this setting
initiates local CSRT stats collection as well as shipping deltas in RPC
responses to accumulate in the coordinator.

This does _not_ trigger the new RPC spawn metrics, and it does not enable
reporting for any of the rctx types.

*NOTE*: you *MUST* have all nodes in the cluster running a CSRT aware CouchDB
_before_ you enable it on any node, otherwise the old version nodes won't know
how to handle the new RPC formats including an embedded Delta payload.

## config:get(?CSRT, "enable_init_p", false).

Enablement of tracking new metric counters for different `fabric_rpc` operations
spawned by way of `rexi_server:init_p/3`. This is the primary mechanism for
inducing database RPC operations within CouchDB, and these init_p metrics aim to
provide a node level understanding of the workloads being induced by other
coordinator processes. This is especially relevant for databases placed on a
subset of a cluster's nodes, resulting in non-uniform workloads; these metrics
are tailored to provide insight into what work is being spawned on each node in
the cluster as a function of time.

## config:get(?CSRT, "enable_reporting", false).

This is the primary toggle for enabling CSRT process lifetime reports containing
detailed information about the quantity of work induced by the given
request/worker/etc. This is the top level toggle for enabling _any_ reporting,
and there also exists `config:get(?CSRT, "enable_rpc_reporting", false).` to
disable the reporting of any individual RPC workers, leaving the coordinator
responsible for generating a report with the accumulated deltas.

## config:get(?CSRT, "enable_rpc_reporting", false).

This enables the possibility of RPC workers generating reports. They still need
to hit the configured thresholds to induce a report, but this will generate CSRT
process lifetime reports for individual RPC workers that trigger the configured
logger thresholds. This allows for quantifying per node resource usage when
desired, as otherwise the reports are at the http request level and don't
provide per node stats.

The key idea here is that having RPC level CSRT process lifetime reporting is
incredibly useful, but can also generate large quantities of data. For example,
a view query on a Q=64 database will stream results from 64 shard replicas,
resulting in at least 64 RPC reports, plus any that might have been generated
from RPC workers that "lost" the race for a shard replica. This is very useful,
but it is a lot of data given the verbose nature of funneling it through the
RSyslog reports; the ability to write directly to something like ClickHouse or
another columnar store would be great.

Until there's an efficient storage mechanism to stream the results to, the
rsyslog entries work great and are very practical, but care must be taken to not
generate too much data: aggregate queries generate at least `Qx` more reports
than the single report per http request from the coordinator. This setting
exists as a way to either a) utilize the logger matcher configured thresholds to
allow for _any_ rctx's to be recorded when they induce heavy operations, whether
Coordinator or RPC worker; or b) _only_ log workloads at the coordinator level.

NOTE: this setting exists because we lack an expressive enough config
declaration to easily chain the matchspec constructions, as `ets:fun2ms/1` is a
special compile time parse transform macro that requires the full definition to
be specified directly; it cannot be iteratively constructed. That said, you
_can_ register matchers through remsh with more specific and fine grained
pattern matching, and a more expressive system for defining matchers is being
explored.
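Putting the toggles above together, a hypothetical remsh snippet to turn on
collection plus coordinator-level reporting, while leaving per-worker RPC
reporting off, might look like:

```
%% Illustrative: all nodes must already run a CSRT-aware CouchDB
%% before flipping these (see the NOTE above).
config:set("csrt", "enable", "true"),
config:set("csrt", "enable_reporting", "true"),
config:set("csrt", "enable_rpc_reporting", "false").
```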
## config:get_boolean(?CSRT, "should_truncate_reports", true)

Enables truncation of the CSRT process lifetime reports so they don't include
any fields that are zero at the end of the process lifetime, eg don't include
`js_filter=0` in the report if the request did not induce Javascript filtering.

This can be disabled if you really care about consistent fields in the report
logs, but it is a log space saving mechanism, similar to disabling RPC reporting
by default, as it's a simple way to reduce overall volume.

## config:get(?CSRT, "randomize_testing", true).

This is a `make eunit` only feature toggle that will induce randomness into the
cluster's `couch_srt:is_enabled()` state, specifically to utilize the test suite
to exercise edge case scenarios and failures when CSRT is only conditionally
enabled, ensuring that it gracefully and robustly handles errors without fallout
to the underlying http clients.

The idea here is to introduce randomness into whether CSRT is enabled across all
the nodes, to simulate clusters with heterogeneous CSRT enablement and also to
ensure that CSRT works properly when toggled on/off without causing any
unexpected fallout to the client requests.

This is a config toggle specifically so that the actual CSRT tests can disable
it for making accurate assertions about resource usage tracking; it is not
intended to be used directly.

## config:get_integer(?CSRT, "query_limit", ?QUERY_LIMIT)

Limit the quantity of rows that can be loaded in an http query.

Review Comment:
   Very nice, that makes sense

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org