https://bugzilla.wikimedia.org/show_bug.cgi?id=49762

Ori Livneh <[email protected]> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |[email protected]

--- Comment #2 from Ori Livneh <[email protected]> ---
I think your options are as follows:

For request / error / exception logging, you can send datagrams to
fluorine.eqiad.wmnet:8420 in the format of '<sequence id>\t<log name> <log
message>', where sequence ID is a uint64_t number that increments with every
transmission.

(At least in theory. The sequence ID can be used as a basis for log sampling
and for detecting packet loss, but neither is going to be useful for you, so I
don't think there'd be any practical consequences to sending some constant
number instead.)
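The datagram format above can be sketched like so. The host, port, and '<sequence id>\t<log name> <log message>' layout come from this comment; Python and the helper names are my own assumptions:

```python
import itertools
import socket

FLUORINE = ("fluorine.eqiad.wmnet", 8420)

# uint64 sequence id, incremented with every transmission (per the
# caveat above, a constant would probably also work in practice)
_seq = itertools.count(1)
_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def format_datagram(channel, message, seq):
    """Build a '<sequence id>\\t<log name> <log message>' payload."""
    return "%d\t%s %s" % (seq, channel, message)

def log_to_fluorine(channel, message):
    """Fire-and-forget one log line at the udp2log instance (hypothetical helper)."""
    payload = format_datagram(channel, message, next(_seq))
    _sock.sendto(payload.encode("utf-8"), FLUORINE)
```

UDP is fire-and-forget, so this never blocks or raises on the receiving end being down; that's the usual trade-off with udp2log-style logging.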

The udp2log instance on fluorine (which is the MediaWiki log aggregator for the
production cluster) will write each log message to /a/mw-log/<log file>.log,
which gets rotated and archived daily. (It was formerly the case that someone
with root would have to touch the file once for udp2log to start writing the
log stream, but I am told this is no longer required. YMMV.)

Because log messages go into flat files, this doesn't buy you much by way of
fancy queryability. The primary benefit would likely be an increased chance
that someone unfamiliar with Parsoid would be able to find the log files in
case of an outage.

There are also some tentative plans for writing this log data into a log
analysis tool like logstash, and by logging to fluorine you make it more likely
that your logs would be ported over for you at some point without you needing
to do anything about it.

If you want the ability to slice and dice the log data using some high-level
query language, you can create an event schema on metawiki, then log JSON blobs
conforming to that schema via UDP to vanadium.eqiad.wmnet:8421. Your schema
will be used to create both a MySQL table on db1047 and a MongoDB collection on
vanadium and each entry will be written to both. See
<https://meta.wikimedia.org/wiki/Schema:MobileWebCentralAuthError> for an
example.
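As a rough sketch of that path: the host and port are from this comment, but the envelope fields below (schema / revision / event) are an assumption modeled on EventLogging's event capsule — check the EventLogging documentation for the exact wire format:

```python
import json
import socket

VANADIUM = ("vanadium.eqiad.wmnet", 8421)

def format_event(schema, revision, event):
    """Serialize one event as a JSON blob. The schema/revision/event
    envelope is an assumption, not confirmed by this comment."""
    return json.dumps({"schema": schema, "revision": revision, "event": event})

def log_event(schema, revision, event):
    """Send one schema-conforming JSON blob to the EventLogging endpoint."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(format_event(schema, revision, event).encode("utf-8"), VANADIUM)
```

Each accepted event then lands in both the MySQL table on db1047 and the MongoDB collection on vanadium, as described above.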

For logging stats (like counters and time measurements), use StatsD! There's an
instance on the cluster that writes to either/both Ganglia and Graphite. The
StatsD protocol is also UDP-based, and it is extremely simple: see
<https://github.com/b/statsd_spec> for the format. You'll get
automatically-generated graphs representing various summary figures computed at
a regular interval.
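The StatsD wire format really is that simple — '<metric name>:<value>|<type>', one metric per datagram. A minimal sketch (the hostname below is hypothetical, since this comment doesn't name the cluster's StatsD instance; 8125 is StatsD's conventional port):

```python
import socket

# Hypothetical host -- substitute the cluster's actual StatsD instance.
STATSD = ("statsd.example.wmnet", 8125)

def format_metric(name, value, mtype):
    """'<metric name>:<value>|<type>': 'c' for counters, 'ms' for
    timing measurements, 'g' for gauges."""
    return "%s:%s|%s" % (name, value, mtype)

def send_metric(name, value, mtype):
    """Emit one metric datagram to the StatsD instance."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(format_metric(name, value, mtype).encode("utf-8"), STATSD)
```

For example, format_metric("parsoid.requests", 1, "c") produces "parsoid.requests:1|c" (a counter increment), and format_metric("parsoid.parse_time", 250, "ms") records a 250 ms timing.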

-- 
You are receiving this mail because:
You are on the CC list for the bug.
_______________________________________________
Wikibugs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l