Hello Ottomata, Gage,

I'd like you to do a code review.  Please visit

    https://gerrit.wikimedia.org/r/185442

to review the following change.

Change subject: Mail webrequest partition status summaries to analytics ops
......................................................................

Mail webrequest partition status summaries to analytics ops

Change-Id: Ieb575dceb491bb0e36f8dd113830f9e98faefe42
---
M manifests/role/analytics/refinery.pp
M manifests/site.pp
2 files changed, 28 insertions(+), 3 deletions(-)


  git pull ssh://gerrit.wikimedia.org:29418/operations/puppet refs/changes/42/185442/1

diff --git a/manifests/role/analytics/refinery.pp b/manifests/role/analytics/refinery.pp
index f88b9fa..6065ad3 100644
--- a/manifests/role/analytics/refinery.pp
+++ b/manifests/role/analytics/refinery.pp
@@ -92,7 +92,7 @@
     }
 }
 
-# == Class role::analytics::refinery::data::check
+# == Class role::analytics::refinery::data::check::icinga
 # Configures passive/freshness icinga checks or data imports
 # in HDFS.
 #
@@ -107,7 +107,7 @@
 # See: https://phabricator.wikimedia.org/T76414
 #      https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=670373
 #
-class role::analytics::refinery::data::check {
+class role::analytics::refinery::data::check::icinga {
     # We are monitoring hourly datasets.
     # Give Oozie a little time to finish running
     # the monitor_done_flag workflow for each hour.
@@ -154,3 +154,25 @@
         retries         => 1,
     }
 }
+
+# == Class role::analytics::refinery::data::check::hdfs_mount
+# Configures cron jobs that report faultiness of webrequest data
+#
+# These checks walk HDFS through the plain file system.
+#
+class role::analytics::refinery::data::check::hdfs_mount {
+    require role::analytics::refinery
+
+    # This should not be hardcoded.  Instead, one should be able to use
+    # $::cdh::hadoop::mount::mount_point to reference the user supplied
+    # parameter when the cdh::hadoop::mount class is evaluated.
+    # I am not sure why this is not working.
+    $hdfs_mount_point = '/mnt/hdfs'
+
+    cron { 'refinery data check hdfs_mount':
+        command     => "${::role::analytics::refinery::path}/bin/refinery-dump-status-webrequest-partitions --hdfs-mount ${hdfs_mount_point}",
+        environment => '[email protected],[email protected]',
+        user        => 'stats',
+        hour        => 10,
+        minute      => 0,
+    }
+}
diff --git a/manifests/site.pp b/manifests/site.pp
index 904b772..8b43041 100644
--- a/manifests/site.pp
+++ b/manifests/site.pp
@@ -302,7 +302,7 @@
     # These are passive checks, so if
     # icinga is not notified of a successful import
     # hourly, icinga should generate an alert.
-    include role::analytics::refinery::data::check
+    include role::analytics::refinery::data::check::icinga
 }
 
 
@@ -2298,6 +2298,9 @@
     # to public data generated by the Analytics Cluster.
     include role::analytics::rsyncd
 
+    # Include analytics/refinery checks that rely on the hdfs mount.
+    include role::analytics::refinery::data::check::hdfs_mount
+
     # Include the MySQL research password at
     # /etc/mysql/conf.d/analytics-research-client.cnf
     # and only readable by users in the

-- 
To view, visit https://gerrit.wikimedia.org/r/185442
To unsubscribe, visit https://gerrit.wikimedia.org/r/settings

Gerrit-MessageType: newchange
Gerrit-Change-Id: Ieb575dceb491bb0e36f8dd113830f9e98faefe42
Gerrit-PatchSet: 1
Gerrit-Project: operations/puppet
Gerrit-Branch: production
Gerrit-Owner: QChris <[email protected]>
Gerrit-Reviewer: Gage <[email protected]>
Gerrit-Reviewer: Ottomata <[email protected]>

_______________________________________________
MediaWiki-commits mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/mediawiki-commits
