chewbranca commented on code in PR #5602:
URL: https://github.com/apache/couchdb/pull/5602#discussion_r2221096062


##########
src/couch_stats/CSRT.md:
##########
@@ -0,0 +1,893 @@
+# Couch Stats Resource Tracker (CSRT)
+
+CSRT (Couch Stats Resource Tracker) is a real time stats tracking system that
+tracks the quantity of resources induced at the process level in a live
+queryable manner, and also generates process lifetime reports containing
+statistics on the total resource load of a request: dbs/docs opened, view and
+changes rows read, changes returned vs processed, JavaScript filter usage,
+duration, and more. This system is a paradigm shift in CouchDB visibility and
+introspection, allowing for expressive real time querying capabilities to
+introspect, understand, and aggregate CouchDB internal resource usage, as well
+as powerful filtering facilities for conditionally generating reports on
+"heavy usage" or "long/slow" requests. CSRT also extends `recon:proc_window`
+with `csrt:proc_window`, allowing the same style of battle hardened
+introspection as Recon's excellent `proc_window`, but with the sample window
+over any of the CSRT tracked CouchDB stats!
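+
+For example, a remsh invocation might look like the following (a sketch
+assuming `csrt:proc_window/3` mirrors `recon:proc_window/3`'s
+`(Attr, Num, Time)` argument order, with a CSRT tracked stat as the
+attribute):
+
+```erlang
+%% Top 10 processes by ioq_calls induced over a 5 second sample window.
+csrt:proc_window(ioq_calls, 10, 5000).
+```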
+
+CSRT does this by piggy-backing off of the existing metrics tracked by way of
+`couch_stats:increment_counter`: at the moment the local process induces those
+metric inc calls, CSRT updates an ets entry containing the context information
+for the local process, such that global aggregate queries can be performed
+against the ets table, and a process resource usage report can be generated at
+the conclusion of the process's lifecycle. The ability to do aggregate
+querying in real time, in addition to the process lifecycle reports for post
+facto analysis over time, is a cornerstone of CSRT and the result of a series
+of iterations until a robust and scalable approach was built.
+
+The real time querying is achieved by way of a global ets table with
+`read_concurrency`, `write_concurrency`, and `decentralized_counters` enabled.
+Great care was taken to ensure that _zero_ concurrent writes to the same key
+occur in this model; the entire system is predicated on the fact that
+incremental updates via `ets:update_counter` are *really* fast and efficient,
+in an atomic and isolated fashion, when coupled with decentralized counters
+and write concurrency. Each process that calls
+`couch_stats:increment_counter` tracks its local context in CSRT as well, with
+zero concurrent writes from any other processes. Outside of the context setup
+and teardown logic, _only_ operations to `ets:update_counter` are performed:
+one per process invocation of `couch_stats:increment_counter`, and one for
+coordinators to update worker deltas in a single batch, resulting in a 1:1
+ratio of ets calls to real time stats updates for the primary workloads.
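+
+A minimal sketch of the table setup described above (the actual options and
+ownership live in `csrt_server`):
+
+```erlang
+%% Concurrent readers and writers never contend on the same key, so
+%% these options let ets scale update_counter calls across schedulers.
+ets:new(?CSRT_ETS, [
+    named_table,
+    public,
+    {read_concurrency, true},
+    {write_concurrency, true},
+    {decentralized_counters, true}
+]).
+```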
+
+The primary achievement of CSRT is the core framework itself for concurrent
+process local stats tracking and real time RPC delta accumulation in a
+scalable manner that allows for real time aggregate querying and process
+lifecycle reports. It took several versions to find a robust and scalable
+approach that induced minimal impact on maximum system throughput. Now that
+the framework is in place, it can be extended to track any further desired
+process local uses of `couch_stats:increment_counter`. That said, the
+currently selected set of stats to track was heavily influenced by the
+challenges in retroactively understanding the quantity of resources induced
+by a query like `/db/_changes?since=$SEQ`, or similarly, `/db/_find`.
+
+CSRT started as an extension of the Mango execution stats logic to `_changes`
+feeds, to get proper visibility into the quantity of docs read and filtered
+per changes request, but then the focus inverted with the realization that we
+should instead use the existing stats tracking mechanisms, which track
+information already deemed critical, and which then also allow for the real
+time tracking and aggregate query capabilities. The Mango execution stats can
+be ported into CSRT itself and become one subset of the stats tracked as a
+whole; similarly, any additional desired stats tracking can be easily added
+and will be picked up in the RPC deltas and process lifetime reports.
+
+# CSRT Config Keys
+
+## -define(CSRT, "csrt").
+
+> config:get("csrt").
+
+Primary CSRT config namespace: contains core settings for enabling different
+layers of functionality in CSRT, along with global config settings for limiting
+data volume generation.
+
+## -define(CSRT_MATCHERS_ENABLED, "csrt_logger.matchers_enabled").
+
+> config:get("csrt_logger.matchers_enabled").
+
+Config toggles for enabling specific builtin logger matchers, see the dedicated
+section below on `# CSRT Default Matchers`.
+
+## -define(CSRT_MATCHERS_THRESHOLD, "csrt_logger.matchers_threshold").
+
+> config:get("csrt_logger.matchers_threshold").
+
+Config settings for defining the primary `Threshold` value of the builtin
+logger matchers, see the dedicated section below on `# CSRT Default Matchers`.
+
+## -define(CSRT_MATCHERS_DBNAMES, "csrt_logger.dbnames_io").
+
+> config:get("csrt_logger.matchers_enabled").
+
+Config section for setting `$db_name = $threshold`, resulting in
+instantiating a "dbname_io" logger matcher for each `$db_name` that will
+generate a CSRT lifecycle report for any context that induced more operations
+on _any_ one field of `ioq_calls|get_kv_node|get_kp_node|docs_read|rows_read`
+than `$threshold`, on database `$db_name`.
+
+This is basically a simple matcher for finding heavy IO requests on a
+particular database, in a manner amenable to key/value pair specifications in
+this .ini file until a more sophisticated declarative model exists. In
+particular, it's not easy to sequentially generate matchspecs by way of
+`ets:fun2ms/1`, and so an alternative mechanism for either dynamically
+assembling an `#rctx{}` to match against or generating the raw matchspecs
+themselves is warranted.
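+
+For example, a hypothetical setting of the threshold for one database (the
+section name comes from `?CSRT_MATCHERS_DBNAMES` above; the db name and value
+are illustrative):
+
+```erlang
+%% Report any context on "my_db" that crosses 10000 on any one of the
+%% tracked IO fields; equivalent to an ini entry under
+%% [csrt_logger.dbnames_io].
+config:set("csrt_logger.dbnames_io", "my_db", "10000").
+```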
+
+## -define(CSRT_INIT_P, "csrt.init_p").
+
+> config:get("csrt.init_p").
+
+Config toggles for tracking counters on the spawning of RPC `fabric_rpc`
+workers by way of `rexi_server:init_p`. This allows us to conditionally
+enable new metrics for the desired RPC operations in an expandable manner,
+without having to add new stats for every single potential RPC operation.
+These are the individual metrics to track; the feature itself is enabled by
+way of the config toggle `config:get(?CSRT, "enable_init_p")`, and these
+configs can be left alone for the most part until new operations are tracked.
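+
+A sketch of the key scheme, following the `Mod ++ "__" ++ Fun` form described
+in the `enable_init_p` section below (the lookup shown is illustrative):
+
+```erlang
+%% "fabric_rpc__open_doc" is the config key for fabric_rpc:open_doc.
+Key = atom_to_list(fabric_rpc) ++ "__" ++ atom_to_list(open_doc),
+config:get_boolean("csrt.init_p", Key, false).
+```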
+
+# CSRT Code Markers
+
+## -define(CSRT_ETS, csrt_server).
+
+This is the reference to the CSRT ets table; it's managed by `csrt_server`,
+which is where the name originates from.
+
+## -define(MATCHERS_KEY, {csrt_logger, all_csrt_matchers}).
+
+This marker is where the active matchers are written to in `persistent_term`,
+for concurrent and parallel access to the logger matchers by the CSRT tracker
+processes for lifecycle reporting.
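+
+A sketch of the read side (the `[]` default is an assumption):
+
+```erlang
+%% persistent_term reads are cheap and copy-free, so every tracker
+%% process can fetch the active matchers for each lifecycle report.
+Matchers = persistent_term:get(?MATCHERS_KEY, []).
+```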
+
+# CSRT Process Dictionary Markers
+
+## -define(PID_REF, {csrt, pid_ref}).
+
+This marker stores the core `PidRef` identifier. The key idea here is that a
+CSRT context lifecycle is contained within the given `PidRef`, meaning that a
+`Pid` can instantiate different CSRT lifecycles and pass those to different
+workers.
+
+This is specifically necessary for long running processes that need to handle
+many CSRT context lifecycles over the course of that individual process's own
+lifecycle. In practice, this is immediately needed for the actual coordinator
+lifecycle tracking, as `chttpd` uses a worker pool of http request handlers
+that can be re-used, so we need a way to create a CSRT lifecycle corresponding
+to the given request currently being serviced. This is also intended to be
+used in other long running processes, like IOQ or `couch_js` pids, such that
+we can track the specific context inducing the operations on the `couch_file`
+pid or indexer or replicator or whatever.
+
+Worker processes have a more clear cut lifecycle, but either style of process
+can be exit'ed in a manner that skips the ability to do cleanup operations, so
+additionally a dedicated tracker process is spawned to monitor the process
+that induced the CSRT context, such that we can do the dynamic logger matching
+directly in these tracker processes and can also properly clean up the ets
+entries even if the Pid crashes.
+
+## -define(TRACKER_PID, {csrt, tracker}).
+
+A handle to the spawned tracker process that does cleanup and logger matching
+reports at the end of the process lifecycle. We store a reference to the
+tracker pid so that for explicit context destruction, like in `chttpd` workers
+after a request has been serviced, we can stop the tracker and perform the
+expected cleanup directly.
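+
+A minimal sketch of that tracker pattern as described above (function names,
+message shapes, and the `{Pid, Ref}` form of `PidRef` are illustrative, not
+the exact `csrt` API):
+
+```erlang
+%% Monitor the context-owning Pid: cleanup runs on explicit stop (eg a
+%% chttpd worker finishing a request) and on abnormal exits alike.
+spawn_tracker({Pid, _Ref} = PidRef) ->
+    spawn(fun() ->
+        MonRef = erlang:monitor(process, Pid),
+        receive
+            stop ->
+                cleanup(PidRef);
+            {'DOWN', MonRef, process, Pid, _Reason} ->
+                cleanup(PidRef)
+        end
+    end).
+
+cleanup(PidRef) ->
+    ets:delete(?CSRT_ETS, PidRef).
+```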
+
+## -define(DELTA_TA, {csrt, delta_ta}).
+
+This stores our last delta snapshot, to track progress since the last
+incremental streaming of stats back to the coordinator process. It is updated
+with the latest value after the next delta is made. Eg this stores `T0` so we
+can do `T1 = get_resource()`, `make_delta(T0, T1)`, and then save `T1` as the
+new `T0` for use in our next delta.
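+
+A sketch of that snapshot cycle (helper names follow the prose above and are
+illustrative):
+
+```erlang
+%% Delta of work done since the last snapshot; T1 becomes the new T0.
+make_next_delta() ->
+    T0 = get(?DELTA_TA),
+    T1 = get_resource(),
+    Delta = make_delta(T0, T1),
+    put(?DELTA_TA, T1),
+    Delta.
+```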
+
+## -define(LAST_UPDATED, {csrt, last_updated}).
+
+This stores the integer corresponding to the `erlang:monotonic_time()` value
+of the most recent `updated_at` value. Basically this lets us utilize a pdict
+value to turn `updated_at` tracking into an incremental operation that can be
+chained into the existing atomic `ets:update_counter` and
+`ets:update_element` calls.
+
+The issue being that our updates are of the form `+2 to ioq_calls for
+$pid_ref`, which ets does in a guaranteed `atomic` and `isolated` manner. The
+strict use of these atomic operations is why this system works efficiently at
+scale. It means that we can increment counters on all of the stats counter
+fields in a batch, very quickly; but for tracking `updated_at` timestamps we'd
+need either an extra ets call to get the last `updated_at` value, or an extra
+`ets:update_element` call to set the `updated_at` value to `csrt_util:tnow()`.
+The core problem is that the batch inc operation is essentially the only write
+operation performed after the initial context setting of dbname/handler/etc;
+this means that we'd literally double the number of ets calls induced to track
+CSRT updates, just for tracking the `updated_at`. So instead, we rely on the
+fact that the local process corresponding to `$pid_ref` is the _only_ process
+doing updates, so we know the last `updated_at` value will be the last time
+this process updated the data. We track that value in the pdict, take a delta
+between `tnow()` and `updated_at`, and then `updated_at` becomes a value we
+can sneak into the other integer counter updates we're already performing!
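+
+A sketch of that trick, assuming `#rctx{}` has `ioq_calls` and `updated_at`
+fields (field names come from this document; the actual record and update
+batch live in the csrt modules):
+
+```erlang
+%% The timestamp bump rides along in the same atomic update_counter
+%% batch as the stat increments: no extra ets call needed. Assumes
+%% context setup seeded ?LAST_UPDATED in the pdict.
+inc_ioq_calls(PidRef, N) ->
+    Now = csrt_util:tnow(),
+    Last = put(?LAST_UPDATED, Now),
+    ets:update_counter(?CSRT_ETS, PidRef, [
+        {#rctx.ioq_calls, N},
+        {#rctx.updated_at, Now - Last}
+    ]).
+```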
+
+# Primary Config Toggles
+
+# CSRT (?CSRT="csrt") Config Settings
+
+## config:get(?CSRT, "enable", false).
+
+Core enablement toggle for CSRT, defaults to false. Enabling this setting
+initiates local CSRT stats collection as well as shipping deltas in RPC
+responses to accumulate in the coordinator.
+
+This does _not_ trigger the new RPC spawn metrics, and it does not enable
+reporting for any of the rctx types.
+
+*NOTE*: you *MUST* have all nodes in the cluster running a CSRT aware CouchDB
+_before_ you enable it on any node, otherwise the old version nodes won't know
+how to handle the new RPC formats including an embedded Delta payload.
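+
+For example, once every node is CSRT aware, it might be enabled from a remsh
+(a sketch; the equivalent ini entry is `enable = true` under `[csrt]`):
+
+```erlang
+%% Enable core CSRT stats collection and RPC delta shipping.
+config:set("csrt", "enable", "true").
+```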
+
+## config:get(?CSRT, "enable_init_p", false).
+
+Enables tracking of new metric counters for the different `fabric_rpc`
+operation types, to track spawn rates of RPC work induced across the cluster.
+There are corresponding config lookups in the `?CSRT_INIT_P` namespace for
+keys of the form `atom_to_list(Mod) ++ "__" ++ atom_to_list(Fun)`, eg
+`"fabric_rpc__open_doc"`, for enabling the specific RPC endpoints.
+
+However, those individual settings can be ignored and this top level config
+toggle is what should be used in general, as the function specific config
+toggles predominantly exist to enable tracking a subset of total RPC
+operations in the cluster, and new endpoints can be added here.
+
+## config:get(?CSRT, "enable_reporting", false).
+
+This is the primary toggle for enabling CSRT process lifetime reports
+containing detailed information about the quantity of work induced by the
+given request/worker/etc. This is the top level toggle for enabling _any_
+reporting, and there also exists `config:get(?CSRT, "enable_rpc_reporting",
+false).` to disable the reporting of any individual RPC workers, leaving the
+coordinator responsible for generating a report with the accumulated deltas.
+
+## config:get(?CSRT, "enable_rpc_reporting", false).
+
+This enables the possibility of RPC workers generating reports. They still
+need to hit the configured thresholds to induce a report, but this will
+generate CSRT process lifetime reports for individual RPC workers that trigger
+the configured logger thresholds. This allows for quantifying per node
+resource usage when desired, as otherwise the reports are at the http request
+level and don't provide per node stats.
+
+The key idea here is that having RPC level CSRT process lifetime reporting is
+incredibly useful, but can also generate large quantities of data. For
+example, a view query on a Q=64 database will stream results from 64 shard
+replicas, resulting in at least 64 RPC reports, plus any generated by RPC
+workers that "lost" the race for a shard replica. That is very useful, but a
+lot of data given the verbose nature of funneling it through RSyslog reports;
+the ability to write directly to something like ClickHouse or another columnar
+store would be great.

Review Comment:
   The use of strictly monotonically increasing integer counter updates was
   specifically to make it easy to do calculus on the stats collected, for
   rate of change and aggregations: if you take a snapshot at time A and
   another snapshot at time B, the delta between stats divided by the window
   is the rate of change, and the delta itself is the total work done in that
   window.
   
   Put differently, the derivative from A to B gives the rate of change in
   that time window, as opposed to the integral giving the total usage
   induced.
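   
   An illustrative calculation (not a CSRT API, just the arithmetic described
   above):
   
   ```erlang
   %% Two snapshots of a counter at times TA and TB (in milliseconds):
   %% the delta is the total work in the window; delta over window is
   %% the rate.
   rate_per_sec(StatA, StatB, TA, TB) ->
       (StatB - StatA) / ((TB - TA) / 1000).
   ```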
   
   CSRT's purpose is to get this data and make it available, but realistically
   you want a dedicated analysis stack like ClickHouse to do integrals on
   reports over time, or even `awk` over logs; we don't preserve the logs in
   memory for this type of analysis. You can easily do the derivatives by way
   of `csrt:proc_window/3`, built on top of `recon:proc_window/3`, and
   exposing that over the HTTP endpoint is a great next PR on top of this
   work.
   
   See the HTTP docs and `csrt_query` docs for more info on the query
   capabilities. Much care has been taken to avoid accumulation of large
   quantities of `#rctx{}` entries, so this first version of CSRT cleans
   things out expeditiously.
   
   Another key idea of CSRT in regards to this comment is that it specifically
   tracks metrics we're already tracking, so that you can look at the already
   existing node level metrics to see larger time series trends, while CSRT
   provides the high cardinality insight into what is actually utilizing those
   resources.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@couchdb.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org