Joal has submitted this change and it was merged.
Change subject: Add initial oozie job for ApiAction
......................................................................
Add initial oozie job for ApiAction
These changes build on I4c1436f5062f66f76971feea4ba16597ad4159f7
(CirrusSearchRequestSet) and add some abstraction for the parts that
would otherwise have been cut-and-paste copies:
* refinery-drop-cirrus-searchrequest-set-partitions renamed to
refinery-drop-hourly-partitions, with the CirrusSearchRequestSet-specific
default values removed. Jobs using
refinery-drop-cirrus-searchrequest-set-partitions should be updated to
use refinery-drop-hourly-partitions and pass
`--table=CirrusSearchRequestSet` (see the example below).
* Fixed a minor logic bug in refinery-drop-hourly-partitions when table
location is not specified.
* oozie/mediawiki/cirrus-searchrequest-set/load coordinator and workflow
moved to oozie/mediawiki/load and turned into a bundle.
* New bundle extended to also control a load job for ApiAction.
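
For reference, a migrated invocation might look like the following sketch
(the database and retention values are illustrative, not taken from any
deployed job):

    refinery-drop-hourly-partitions \
        --database=wmf_raw \
        --table=CirrusSearchRequestSet \
        --older-than-days=60
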
Bug: T108618
Change-Id: Ie5a402018b347a6fbcd08fb25ebdf59e7ca2bdf4
---
R bin/refinery-drop-hourly-partitions
A hive/mediawiki/api-action/create_ApiAction_table.hql
A oozie/mediawiki/README.md
D oozie/mediawiki/cirrus-searchrequest-set/README.md
D oozie/mediawiki/cirrus-searchrequest-set/datasets_raw.xml
D oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.properties
A oozie/mediawiki/datasets_raw.xml
A oozie/mediawiki/load/bundle.properties
A oozie/mediawiki/load/bundle.xml
R oozie/mediawiki/load/coordinator.xml
R oozie/mediawiki/load/workflow.xml
11 files changed, 257 insertions(+), 101 deletions(-)
Approvals:
Ottomata: Looks good to me, but someone else must approve
Joal: Verified; Looks good to me, approved
diff --git a/bin/refinery-drop-cirrus-searchrequest-set-partitions b/bin/refinery-drop-hourly-partitions
similarity index 93%
rename from bin/refinery-drop-cirrus-searchrequest-set-partitions
rename to bin/refinery-drop-hourly-partitions
index 951139f..1a44690 100755
--- a/bin/refinery-drop-cirrus-searchrequest-set-partitions
+++ b/bin/refinery-drop-hourly-partitions
@@ -17,17 +17,17 @@
# export PYTHONPATH=$PYTHONPATH:/path/to/refinery/python
"""
-Automatically drops old Hive partitions from the CirrusSearchRequestSet table
+Automatically drops old Hive partitions from the table
and deletes the hourly time bucketed directory from HDFS.
-Usage: refinery-drop-cirrus-searchrequest-set-partitions [options]
+Usage: refinery-drop-hourly-partitions [options]
Options:
    -h --help                           Show this help message and exit.
    -d --older-than-days=<days>         Drop data older than this number of
                                        days. [default: 60]
    -D --database=<dbname>              Hive database name. [default: default]
-   -t --table=<table>                  Name of CirrusSearchRequestSet table.
-                                       [default: CirrusSearchRequestSet]
-   -l --location=<location>            Base HDFS location path of the
-                                       CirrusSearchRequestSet table. If not
+   -t --table=<table>                  Name of table.
+   -l --location=<location>            Base HDFS location path of the table. If not
                                        specified, this will be inferred from
                                        the table schema metadata.
    -o --hive-options=<options>         Any valid Hive CLI options you want to
                                        pass to Hive commands.
                                        Example: '--auxpath /path/to/hive-serdes-1.0-SNAPSHOT.jar'
@@ -73,7 +73,7 @@
        datefmt='%Y-%m-%dT%H:%M:%S')

-   if not HdfsUtils.validate_path(table_location):
+   if table_location and not HdfsUtils.validate_path(table_location):
        logging.error('{0} table location \'{1}\' is not a valid HDFS path. Path must start with \'/\' or \'hdfs://\'. Aborting.'
                      .format(table, table_location))
        sys.exit(1)
@@ -82,7 +82,7 @@
    # Instantiate HiveUtils.
    hive = HiveUtils(database, hive_options)

-   # The base location of this CirrusSearchRequestSet table in HDFS.
+   # The base location of this table in HDFS.
    # If it was not provided via the CLI, then attempt to
    # infer it from the table metadata.
    if table_location == None:
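
With this fix, omitting --location no longer trips the HDFS path
validation; the script instead infers the location from the Hive table
metadata. A hypothetical pair of invocations (the explicit path mirrors
the LOCATION used in create_ApiAction_table.hql below):

    # Location inferred from table metadata:
    refinery-drop-hourly-partitions --database=wmf_raw --table=ApiAction

    # Location pinned explicitly:
    refinery-drop-hourly-partitions --database=wmf_raw --table=ApiAction \
        --location=hdfs://analytics-hadoop/wmf/data/raw/mediawiki/mediawiki_ApiAction/hourly
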
diff --git a/hive/mediawiki/api-action/create_ApiAction_table.hql b/hive/mediawiki/api-action/create_ApiAction_table.hql
new file mode 100644
index 0000000..2aff95e
--- /dev/null
+++ b/hive/mediawiki/api-action/create_ApiAction_table.hql
@@ -0,0 +1,39 @@
+-- Create ApiAction table
+--
+-- MediaWiki Action API (api.php) requests
+--
+-- NOTE: the schema is embedded in the CREATE statement.
+-- This is not ideal because:
+-- * We'll have to re-create the table when we change the schema.
+-- * If the schema grows, it might exceed the length Hive allots to
+--   this field, and table creation will then fail.
+--
+-- See https://phabricator.wikimedia.org/T118155 for more details
+--
+-- TODO: review this script and use avro.schema.url once we have
+-- an official repo in HDFS for schemas.
+--
+-- Parameters:
+-- None
+-- Usage:
+-- hive -f create_ApiAction_table.hql --database wmf_raw
+--
+
+CREATE EXTERNAL TABLE ApiAction
+PARTITIONED BY (
+ `year` string,
+ `month` string,
+ `day` string,
+ `hour` string)
+ROW FORMAT SERDE
+ 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
+STORED AS INPUTFORMAT
+ 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
+OUTPUTFORMAT
+ 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
+LOCATION
+ 'hdfs://analytics-hadoop/wmf/data/raw/mediawiki/mediawiki_ApiAction/hourly'
+TBLPROPERTIES (
+'avro.schema.literal'='{"type":"record","name":"ActionApi","namespace":"org.wikimedia.analytics.schemas","fields":[{"name":"ts","type":"int","default":0},{"name":"ip","type":"string","default":""},{"name":"userAgent","type":"string","default":""},{"name":"wiki","type":"string","default":""},{"name":"timeSpentBackend","type":"int","default":-1},{"name":"hadError","type":"boolean","default":false},{"name":"errorCodes","type":{"type":"array","items":"string"},"default":[]},{"name":"params","type":{"type":"map","values":"string"},"default":{}}]}'
+)
+;
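
After running the .hql above, one quick sanity check of the resulting
table definition would be (standard Hive CLI, shown only as a suggestion):

    hive --database wmf_raw -e 'SHOW CREATE TABLE ApiAction;'
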
diff --git a/oozie/mediawiki/README.md b/oozie/mediawiki/README.md
new file mode 100644
index 0000000..0f93466
--- /dev/null
+++ b/oozie/mediawiki/README.md
@@ -0,0 +1,12 @@
+This directory contains the dataset definition and coordinators that launch
+jobs specific to data loaded from MediaWiki's Avro+Kafka data pipeline.
+
+If you are producing a new Avro dataset via MediaWiki Monolog and Kafka,
+you should use these Oozie configs to import your data and automatically add Hive partitions to it. Most things needed to do this are abstracted here via the 'channel' property that is distinct for each coordinator launched by bundle.xml.
+
+Steps to add a new coordinator:
+
+- Add a CREATE TABLE hive file in hive/mediawiki and create your table in Hive.
+- Add a new coordinator declaration in bundle.xml and set $channel
+ and $raw_data_directory appropriately.
+- Relaunch the bundle.
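
The "Relaunch the bundle" step maps onto the standard Oozie CLI; a sketch,
assuming you first look up the id of the running bundle:

    # Find the currently running bundle:
    oozie jobs -jobtype bundle

    # Kill it, then resubmit against the updated bundle.xml:
    oozie job -kill <bundle_id>
    oozie job -submit -config oozie/mediawiki/load/bundle.properties
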
diff --git a/oozie/mediawiki/cirrus-searchrequest-set/README.md b/oozie/mediawiki/cirrus-searchrequest-set/README.md
deleted file mode 100644
index e00a9ad..0000000
--- a/oozie/mediawiki/cirrus-searchrequest-set/README.md
+++ /dev/null
@@ -1,2 +0,0 @@
-This directory contains the dataset definition and coordinators that launch
-jobs specific to the wmf_raw.CirrusSearchRequestSet Hive table.
diff --git a/oozie/mediawiki/cirrus-searchrequest-set/datasets_raw.xml b/oozie/mediawiki/cirrus-searchrequest-set/datasets_raw.xml
deleted file mode 100644
index ee59cda..0000000
--- a/oozie/mediawiki/cirrus-searchrequest-set/datasets_raw.xml
+++ /dev/null
@@ -1,28 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!--
-Defines reusable datasets for raw cirrus_searchrequest_set data.
-Use this dataset in your coordinator.xml files by setting:
-
- ${start_time} - the initial instance of your data.
- Example: 2014-04-01T00:00Z
-    ${cirrus_searchrequest_set_raw_data_directory} - Path to directory where data is time bucketed.
-        Example: /wmf/data/raw/mediawiki/mediawiki_CirrusSearchRequestSet/
--->
-
-<datasets>
- <dataset name="CirrusSearchRequestSet_raw_imported"
- frequency="${coord:hours(1)}"
- initial-instance="${start_time}"
- timezone="Universal">
-        <uri-template>${cirrus_searchrequest_set_raw_data_directory}/hourly/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
- <done-flag>_IMPORTED</done-flag>
- </dataset>
-
- <dataset name="CirrusSearchRequestSet_raw_success"
- frequency="${coord:hours(1)}"
- initial-instance="${start_time}"
- timezone="Universal">
-        <uri-template>${cirrus_searchrequest_set_raw_data_directory}/hourly/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
- <done-flag>_SUCCESS</done-flag>
- </dataset>
-</datasets>
diff --git a/oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.properties b/oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.properties
deleted file mode 100644
index ed8036f..0000000
--- a/oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.properties
+++ /dev/null
@@ -1,52 +0,0 @@
-# Configures a coordinator to manage automatically adding Hive partitions to
-# a CirrusSearchRequestSet table. Any of the following properties are overidable with -D.
-# Usage:
-# oozie job -submit -config oozie/mediawiki/cirrus-searchrequest-set/load/bundle.properties.
-#
-# NOTE: The $oozie_directory must be synced to HDFS so that all relevant
-# .xml files exist there when this job is submitted.
-
-
-name_node = hdfs://analytics-hadoop
-job_tracker = resourcemanager.analytics.eqiad.wmnet:8032
-queue_name = default
-user = hdfs
-
-# Base path in HDFS to oozie files.
-# Other files will be used relative to this path.
-oozie_directory = ${name_node}/wmf/refinery/current/oozie
-
-# HDFS path to workflow to run.
-workflow_file = ${oozie_directory}/mediawiki/cirrus-searchrequest-set/load/workflow.xml
-
-# HDFS path to cirrus searchrequest set dataset definition
-datasets_raw_file = ${oozie_directory}/mediawiki/cirrus-searchrequest-set/datasets_raw.xml
-
-# Initial import time of the cirrus-searchrequest-set dataset.
-start_time = 2015-11-01T00:00Z
-
-# Time to stop running this coordinator. Year 3000 == never!
-stop_time = 3000-01-01T00:00Z
-
-# Workflow to add a partition
-add_partition_workflow_file = ${oozie_directory}/util/hive/partition/add/workflow.xml
-
-# Workflow to mark a directory as done
-mark_directory_done_workflow_file = ${oozie_directory}/util/mark_directory_done/workflow.xml
-
-# Workflow to send an error email
-send_error_email_workflow_file = ${oozie_directory}/util/send_error_email/workflow.xml
-
-# HDFS path to hive-site.xml file. This is needed to run hive actions.
-hive_site_xml = ${oozie_directory}/util/hive/hive-site.xml
-
-# Fully qualified Hive table name.
-table = wmf_raw.CirrusSearchRequestSet
-
-# HDFS path to directory where the raw data is time bucketed.
-cirrus_searchrequest_set_raw_data_directory = ${name_node}/wmf/data/raw/mediawiki/mediawiki_CirrusSearchRequestSet/
-
-# Coordinator to start.
-oozie.coord.application.path = ${oozie_directory}/mediawiki/cirrus-searchrequest-set/load/coordinator.xml
-oozie.use.system.libpath = true
-oozie.action.external.stats.write = true
diff --git a/oozie/mediawiki/datasets_raw.xml b/oozie/mediawiki/datasets_raw.xml
new file mode 100644
index 0000000..71b929b
--- /dev/null
+++ b/oozie/mediawiki/datasets_raw.xml
@@ -0,0 +1,28 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+Defines reusable datasets for raw MediaWiki data.
+Use this dataset in your coordinator.xml files by setting:
+
+ ${start_time} - the initial instance of your data.
+ Example: 2014-04-01T00:00Z
+ ${raw_data_directory} - Path to directory where data is time bucketed.
+ Example: /wmf/data/raw/mediawiki/mediawiki_ApiAction/
+-->
+
+<datasets>
+ <dataset name="raw_imported"
+ frequency="${coord:hours(1)}"
+ initial-instance="${start_time}"
+ timezone="Universal">
+        <uri-template>${raw_data_directory}/hourly/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+ <done-flag>_IMPORTED</done-flag>
+ </dataset>
+
+ <dataset name="raw_success"
+ frequency="${coord:hours(1)}"
+ initial-instance="${start_time}"
+ timezone="Universal">
+        <uri-template>${raw_data_directory}/hourly/${YEAR}/${MONTH}/${DAY}/${HOUR}</uri-template>
+ <done-flag>_SUCCESS</done-flag>
+ </dataset>
+</datasets>
diff --git a/oozie/mediawiki/load/bundle.properties b/oozie/mediawiki/load/bundle.properties
new file mode 100644
index 0000000..c6c2605
--- /dev/null
+++ b/oozie/mediawiki/load/bundle.properties
@@ -0,0 +1,48 @@
+# Configures a bundle to manage automatically adding Hive partitions to
+# a table populated from MediaWiki via Kafka & Avro.
+# Any of the following properties are overridable with -D.
+# Usage:
+# oozie job -submit -config oozie/mediawiki/load/bundle.properties.
+#
+# NOTE: The $oozie_directory must be synced to HDFS so that all relevant
+# .xml files exist there when this job is submitted.
+
+
+name_node = hdfs://analytics-hadoop
+job_tracker = resourcemanager.analytics.eqiad.wmnet:8032
+queue_name = default
+user = hdfs
+
+# Base path in HDFS to oozie files.
+# Other files will be used relative to this path.
+oozie_directory = ${name_node}/wmf/refinery/current/oozie
+
+# HDFS path to coordinator to run for each channel.
+coordinator_file = ${oozie_directory}/mediawiki/load/coordinator.xml
+
+# HDFS path to workflow to run.
+workflow_file = ${oozie_directory}/mediawiki/load/workflow.xml
+
+# HDFS path to datasets definition
+datasets_raw_file = ${oozie_directory}/mediawiki/datasets_raw.xml
+
+# Initial import time of the dataset.
+start_time = 2015-11-01T00:00Z
+
+# Time to stop running this coordinator. Year 3000 == never!
+stop_time = 3000-01-01T00:00Z
+
+# Hive database for output
+database = wmf_raw
+
+# Coordinators expect to find datasets in $raw_base_directory/mediawiki_$channel
+raw_base_directory = ${name_node}/wmf/data/raw/mediawiki
+
+# Common email addresses to send workflow errors to.
+# Each coordinator can add additional email addresses to notify in bundle.xml
+common_error_to = [email protected],[email protected],[email protected],[email protected],[email protected],[email protected]
+
+# Bundle to start.
+oozie.bundle.application.path = ${oozie_directory}/mediawiki/load/bundle.xml
+oozie.use.system.libpath = true
+oozie.action.external.stats.write = true
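
As the header comment notes, any of these properties can be overridden at
submit time with -D; for example (values illustrative):

    oozie job -submit \
        -config oozie/mediawiki/load/bundle.properties \
        -Dstart_time=2016-01-01T00:00Z \
        -Dqueue_name=production
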
diff --git a/oozie/mediawiki/load/bundle.xml b/oozie/mediawiki/load/bundle.xml
new file mode 100644
index 0000000..fe36ffc
--- /dev/null
+++ b/oozie/mediawiki/load/bundle.xml
@@ -0,0 +1,56 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<bundle-app xmlns="uri:oozie:bundle:0.2"
+ name="load_mediawiki-bundle">
+
+ <parameters>
+ <!-- Required properties -->
+ <property><name>queue_name</name></property>
+ <property><name>coordinator_file</name></property>
+ <property><name>name_node</name></property>
+ <property><name>job_tracker</name></property>
+ <property><name>workflow_file</name></property>
+ <property><name>start_time</name></property>
+ <property><name>stop_time</name></property>
+ <property><name>database</name></property>
+ <property><name>raw_base_directory</name></property>
+ <property><name>datasets_raw_file</name></property>
+ <property><name>common_error_to</name></property>
+ </parameters>
+
+ <coordinator name="load_mediawiki_CirrusSearchRequestSet-coord">
+ <app-path>${coordinator_file}</app-path>
+ <configuration>
+ <property>
+ <name>channel</name>
+ <value>CirrusSearchRequestSet</value>
+ </property>
+ <property>
+ <name>raw_data_directory</name>
+                <value>${raw_base_directory}/mediawiki_CirrusSearchRequestSet</value>
+ </property>
+ <property>
+ <name>send_error_email_to</name>
+                <value>${common_error_to},[email protected],[email protected],[email protected],[email protected]</value>
+ </property>
+ </configuration>
+ </coordinator>
+
+ <coordinator name="load_mediawiki_ApiAction-coord">
+ <app-path>${coordinator_file}</app-path>
+ <configuration>
+ <property>
+ <name>channel</name>
+ <value>ApiAction</value>
+ </property>
+ <property>
+ <name>raw_data_directory</name>
+ <value>${raw_base_directory}/mediawiki_ApiAction</value>
+ </property>
+ <property>
+ <name>send_error_email_to</name>
+                <value>${common_error_to},[email protected],[email protected],[email protected]</value>
+ </property>
+ </configuration>
+ </coordinator>
+
+</bundle-app>
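
Once submitted, Oozie should materialize one coordinator per <coordinator>
element above (CirrusSearchRequestSet and ApiAction); a hypothetical
status check:

    oozie job -info <bundle_id>
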
diff --git a/oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.xml b/oozie/mediawiki/load/coordinator.xml
similarity index 68%
rename from oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.xml
rename to oozie/mediawiki/load/coordinator.xml
index 1415238..4596762 100644
--- a/oozie/mediawiki/cirrus-searchrequest-set/load/coordinator.xml
+++ b/oozie/mediawiki/load/coordinator.xml
@@ -1,6 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<coordinator-app xmlns="uri:oozie:coordinator:0.4"
- name="load_mediawiki_${table}-coord"
+ name="load_mediawiki-${database}.${channel}-coord"
frequency="${coord:hours(1)}"
start="${start_time}"
end="${stop_time}"
@@ -15,14 +15,39 @@
<property><name>workflow_file</name></property>
<property><name>start_time</name></property>
<property><name>stop_time</name></property>
- <property><name>datasets_raw_file</name></property>
-        <property><name>cirrus_searchrequest_set_raw_data_directory</name></property>
+ <property><name>send_error_email_to</name></property>
+ <property>
+ <name>datasets_raw_file</name>
+ <description>File that describes raw datasets</description>
+ </property>
+ <property>
+ <name>database</name>
+ <description>Hive database to load data into</description>
+ </property>
+ <property>
+ <name>channel</name>
+ <description>MediaWiki debug log channel name</description>
+ </property>
+ <property>
+ <name>raw_data_directory</name>
+ <description>
+                Directory in which this coordinator will look for
+ hourly data.
+ </description>
+ </property>
- <property><name>hive_site_xml</name></property>
- <property><name>add_partition_workflow_file</name></property>
- <property><name>table</name></property>
- <property><name>mark_directory_done_workflow_file</name></property>
- <property><name>send_error_email_workflow_file</name></property>
+ <property>
+ <name>table</name>
+ <value>${database}.${channel}</value>
+ </property>
+ <property>
+ <name>input_dataset</name>
+ <value>raw_imported</value>
+ </property>
+ <property>
+ <name>output_dataset</name>
+ <value>raw_success</value>
+ </property>
</parameters>
<controls>
@@ -63,17 +88,23 @@
<datasets>
<!--
Include the given $datasets_raw_file file. This should
- define the "CirrusSearchRequestSet_raw" datasets for this coordinator.
+ define the "*_raw" datasets for this coordinator.
-->
<include>${datasets_raw_file}</include>
</datasets>
<input-events>
- <data-in name="input" dataset="CirrusSearchRequestSet_raw_imported">
+ <data-in name="input" dataset="${input_dataset}">
<instance>${coord:current(0)}</instance>
</data-in>
</input-events>
+ <output-events>
+ <data-out name="output" dataset="${output_dataset}">
+ <instance>${coord:current(0)}</instance>
+ </data-out>
+ </output-events>
+
<action>
<workflow>
<app-path>${workflow_file}</app-path>
diff --git a/oozie/mediawiki/cirrus-searchrequest-set/load/workflow.xml b/oozie/mediawiki/load/workflow.xml
similarity index 76%
rename from oozie/mediawiki/cirrus-searchrequest-set/load/workflow.xml
rename to oozie/mediawiki/load/workflow.xml
index f289e4a..e933b69 100644
--- a/oozie/mediawiki/cirrus-searchrequest-set/load/workflow.xml
+++ b/oozie/mediawiki/load/workflow.xml
@@ -20,11 +20,17 @@
<property><name>job_tracker</name></property>
<property>
+ <name>oozie_directory</name>
+ <value>${name_node}/wmf/refinery/current/oozie</value>
+ </property>
+ <property>
<name>add_partition_workflow_file</name>
+            <value>${oozie_directory}/util/hive/partition/add/workflow.xml</value>
            <description>Workflow definition for adding a partition</description>
</property>
<property>
<name>hive_site_xml</name>
+ <value>${oozie_directory}/util/hive/hive-site.xml</value>
<description>hive-site.xml file path in HDFS</description>
</property>
<property>
@@ -53,10 +59,12 @@
</property>
<property>
<name>mark_directory_done_workflow_file</name>
+            <value>${oozie_directory}/util/mark_directory_done/workflow.xml</value>
<description>Workflow for marking a directory done</description>
</property>
<property>
<name>send_error_email_workflow_file</name>
+            <value>${oozie_directory}/util/send_error_email/workflow.xml</value>
<description>Workflow for sending an error email</description>
</property>
</parameters>
@@ -69,8 +77,16 @@
<propagate-configuration/>
<configuration>
<property>
+ <name>table</name>
+ <value>${table}</value>
+ </property>
+ <property>
<name>partition_spec</name>
<value>year=${year},month=${month},day=${day},hour=${hour}</value>
+ </property>
+ <property>
+ <name>location</name>
+ <value>${location}</value>
</property>
</configuration>
</sub-workflow>
@@ -81,7 +97,7 @@
<!--
This adds an empty _SUCCESS done-flag file into
the directory for which we just added a Hive partition.
- The CirrusSearchRequestSet_raw_partitioned dataset uses this.
+ The *_raw_partitioned dataset uses this.
-->
<action name="mark_add_partition_done">
<sub-workflow>
@@ -103,8 +119,8 @@
<propagate-configuration/>
<configuration>
<property>
- <name>to</name>
-            <value>[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected]</value>
+ <name>to</name>
+ <value>${send_error_email_to}</value>
</property>
<property>
<name>parent_name</name>
@@ -114,6 +130,14 @@
<name>parent_failed_action</name>
<value>${wf:lastErrorNode()}</value>
</property>
+ <property>
+ <name>parent_error_code</name>
+ <value>${wf:errorCode(wf:lastErrorNode())}</value>
+ </property>
+ <property>
+ <name>parent_error_message</name>
+ <value>${wf:errorMessage(wf:lastErrorNode())}</value>
+ </property>
</configuration>
</sub-workflow>
<ok to="kill"/>
--
To view, visit https://gerrit.wikimedia.org/r/273557
Gerrit-MessageType: merged
Gerrit-Change-Id: Ie5a402018b347a6fbcd08fb25ebdf59e7ca2bdf4
Gerrit-PatchSet: 13
Gerrit-Project: analytics/refinery
Gerrit-Branch: master
Gerrit-Owner: BryanDavis <[email protected]>
Gerrit-Reviewer: Anomie <[email protected]>
Gerrit-Reviewer: BryanDavis <[email protected]>
Gerrit-Reviewer: Elukey <[email protected]>
Gerrit-Reviewer: Gergő Tisza <[email protected]>
Gerrit-Reviewer: Joal <[email protected]>
Gerrit-Reviewer: Nuria <[email protected]>
Gerrit-Reviewer: Ottomata <[email protected]>