Hello community,
here is the log from the commit of package golang-github-prometheus-prometheus
for openSUSE:Factory checked in at 2020-06-11 10:00:06
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/golang-github-prometheus-prometheus (Old)
and
/work/SRC/openSUSE:Factory/.golang-github-prometheus-prometheus.new.3606 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Package is "golang-github-prometheus-prometheus"
Thu Jun 11 10:00:06 2020 rev:8 rq:811857 version:2.18.0
Changes:
--------
---
/work/SRC/openSUSE:Factory/golang-github-prometheus-prometheus/golang-github-prometheus-prometheus.changes
2019-12-02 11:34:08.762480951 +0100
+++
/work/SRC/openSUSE:Factory/.golang-github-prometheus-prometheus.new.3606/golang-github-prometheus-prometheus.changes
2020-06-11 10:00:20.366318293 +0200
@@ -1,0 +2,208 @@
+Wed Jun 3 15:59:05 UTC 2020 - Joao Cavalheiro <[email protected]>
+
+- Update change log and spec file
+ + Modified spec file: default to golang 1.14 to avoid "have choice" build issues in OBS.
+ + Rebase and update patches for version 2.18.0
+ + Changed:
+ * 0001-Do-not-force-the-pure-Go-name-resolver.patch
+ * 0002-Default-settings.patch Changed
+ * 0003-Add-Uyuni-service-discovery.patch
+
+- Update to 2.18.0
+ + Features
+ * Tracing: Added experimental Jaeger support #7148
+ + Changes
+ * Federation: Only use local TSDB for federation (ignore remote read). #7096
+ * Rules: `rule_evaluations_total` and `rule_evaluation_failures_total` have a `rule_group` label now. #7094
+ + Enhancements
+ * TSDB: Significantly reduce WAL size kept around after a block cut. #7098
+ * Discovery: Add `architecture` meta label for EC2. #7000
+ + Bug fixes
+ * UI: Fixed wrong MinTime reported by /status. #7182
+ * React UI: Fixed multiselect legend on OSX. #6880
+ * Remote Write: Fixed blocked resharding edge case. #7122
+ * Remote Write: Fixed remote write not updating on relabel configs change. #7073
+
+- Changes from 2.17.2
+ + Bug fixes
+ * Federation: Register federation metrics #7081
+ * PromQL: Fix panic in parser error handling #7132
+ * Rules: Fix reloads hanging when deleting a rule group that is being evaluated #7138
+ * TSDB: Fix a memory leak when prometheus starts with an empty TSDB WAL #7135
+ * TSDB: Make isolation more robust to panics in web handlers #7129 #7136
+
+- Changes from 2.17.1
+ + Bug fixes
+ * TSDB: Fix query performance regression that increased memory and CPU usage #7051
+
+- Changes from 2.17.0
+ + Features
+ * TSDB: Support isolation #6841
+ * This release implements isolation in TSDB. API queries and recording rules are
+ guaranteed to only see full scrapes and full recording rules. This comes with a
+ certain overhead in resource usage. Depending on the situation, there might be
+ some increase in memory usage, CPU usage, or query latency.
+ + Enhancements
+ * PromQL: Allow more keywords as metric names #6933
+ * React UI: Add normalization of localhost URLs in targets page #6794
+ * Remote read: Read from remote storage concurrently #6770
+ * Rules: Mark deleted rule series as stale after a reload #6745
+ * Scrape: Log scrape append failures as debug rather than warn #6852
+ * TSDB: Improve query performance for queries that partially hit the head #6676
+ * Consul SD: Expose service health as meta label #5313
+ * EC2 SD: Expose EC2 instance lifecycle as meta label #6914
+ * Kubernetes SD: Expose service type as meta label for K8s service role #6684
+ * Kubernetes SD: Expose label_selector and field_selector #6807
+ * Openstack SD: Expose hypervisor id as meta label #6962
+ + Bug fixes
+ * PromQL: Do not escape HTML-like chars in query log #6834 #6795
+ * React UI: Fix data table matrix values #6896
+ * React UI: Fix new targets page not loading when using non-ASCII characters #6892
+ * Remote read: Fix duplication of metrics read from remote storage with external labels #6967 #7018
+ * Remote write: Register WAL watcher and live reader metrics for all remotes, not just the first one #6998
+ * Scrape: Prevent removal of metric names upon relabeling #6891
+ * Scrape: Fix 'superfluous response.WriteHeader call' errors when scrape fails under some circumstances #6986
+ * Scrape: Fix crash when reloads are separated by two scrape intervals #7011
+
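The isolation guarantee described above can be pictured as a watermark over in-flight appends: a query only reads data from appends that completed before it started. The following is a simplified standalone sketch for illustration only, not the actual TSDB code; the names `isolation`, `beginAppend`, and `lowWatermark` are chosen here for exposition:

```go
package main

import "fmt"

// isolation tracks in-flight append "transactions". A reader must ignore
// samples written by any append at or above the low watermark. Sketch only.
type isolation struct {
	nextID uint64
	open   map[uint64]bool // IDs of appends still in flight
}

func newIsolation() *isolation {
	return &isolation{nextID: 1, open: map[uint64]bool{}}
}

// beginAppend starts a scrape/append and returns its transaction ID.
func (i *isolation) beginAppend() uint64 {
	id := i.nextID
	i.nextID++
	i.open[id] = true
	return id
}

// commit marks an append as fully written.
func (i *isolation) commit(id uint64) { delete(i.open, id) }

// lowWatermark returns the lowest still-open append ID; with nothing open
// it returns nextID, meaning everything written so far is visible.
func (i *isolation) lowWatermark() uint64 {
	lw := i.nextID
	for id := range i.open {
		if id < lw {
			lw = id
		}
	}
	return lw
}

func main() {
	iso := newIsolation()
	a := iso.beginAppend() // scrape 1 in flight
	b := iso.beginAppend() // scrape 2 in flight
	iso.commit(b)          // scrape 2 finishes first
	// scrape 1 is still open, so readers stay below its ID:
	fmt.Println(iso.lowWatermark() == a)
}
```

The bookkeeping above is what costs the extra memory and CPU the changelog warns about: every open append must be tracked until it commits.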
+- Changes from 2.16.0
+ + Features
+ * React UI: Support local timezone on /graph #6692
+ * PromQL: add absent_over_time query function #6490
+ * Adding optional logging of queries to their own file #6520
+ + Enhancements
+ * React UI: Add support for rules page and "Xs ago" duration displays #6503
+ * React UI: alerts page, replace filtering togglers tabs with checkboxes #6543
+ * TSDB: Export metric for WAL write errors #6647
+ * TSDB: Improve query performance for queries that only touch the most recent 2h of data. #6651
+ * PromQL: Refactoring in parser errors to improve error messages #6634
+ * PromQL: Support trailing commas in grouping opts #6480
+ * Scrape: Reduce memory usage on reloads by reusing scrape cache #6670
+ * Scrape: Add metrics to track bytes and entries in the metadata cache #6675
+ * promtool: Add support for line-column numbers for invalid rules output #6533
+ * Avoid restarting rule groups when it is unnecessary #6450
+ + Bug fixes
+ * React UI: Send cookies on fetch() on older browsers #6553
+ * React UI: adopt grafana flot fix for stacked graphs #6603
+ * React UI: Fix broken graph page browser history so that back button works as expected #6659
+ * TSDB: ensure compactionsSkipped metric is registered, and log proper error if one is returned from head.Init #6616
+ * TSDB: return an error on ingesting series with duplicate labels #6664
+ * PromQL: Fix unary operator precedence #6579
+ * PromQL: Respect query.timeout even when we reach query.max-concurrency #6712
+ * PromQL: Fix string and parentheses handling in engine, which affected React UI #6612
+ * PromQL: Remove output labels returned by absent() if they are produced by multiple identical label matchers #6493
+ * Scrape: Validate that OpenMetrics input ends with `# EOF` #6505
+ * Remote read: return the correct error if configs can't be marshal'd to JSON #6622
+ * Remote write: Make remote client `Store` use passed context, which can affect shutdown timing #6673
+ * Remote write: Improve sharding calculation in cases where we would always be consistently behind by tracking pendingSamples #6511
+ * Ensure prometheus_rule_group metrics are deleted when a rule group is removed #6693
+
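The `absent_over_time` function added in 2.16.0 yields the value 1 when the queried range contains no samples and returns nothing otherwise. That rule can be sketched in plain Go as a toy illustration (this is not the PromQL engine, and `absentOverTime` is a name invented here):

```go
package main

import "fmt"

// absentOverTime mirrors the documented behaviour of PromQL's
// absent_over_time: an empty range yields the value 1 (ok=true),
// a non-empty range yields no result (ok=false). Toy sketch only.
func absentOverTime(samples []float64) (float64, bool) {
	if len(samples) == 0 {
		return 1, true
	}
	return 0, false
}

func main() {
	if v, ok := absentOverTime(nil); ok {
		fmt.Println(v) // 1: no samples existed in the range
	}
	_, ok := absentOverTime([]float64{0.5, 0.7})
	fmt.Println(ok) // false: series present, so no result is emitted
}
```

This mirrors why the function is useful for alerting: it fires precisely when a series that should exist has disappeared for the whole range.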
+- Changes from 2.15.2
+ + Bug fixes
+ * TSDB: Fixed support for TSDB blocks built with Prometheus before 2.1.0. #6564
+ * TSDB: Fixed block compaction issues on Windows. #6547
+
+- Changes from 2.15.1
+ + Bug fixes
+ * TSDB: Fixed race on concurrent queries against same data. #6512
+
+- Changes from 2.15.0
+ + Features
+ * API: Added new endpoint for exposing per metric metadata `/metadata`. #6420 #6442
+ + Changes
+ * Discovery: Removed `prometheus_sd_kubernetes_cache_*` metrics. Additionally `prometheus_sd_kubernetes_workqueue_latency_seconds` and `prometheus_sd_kubernetes_workqueue_work_duration_seconds` metrics now show correct values in seconds. #6393
+ * Remote write: Changed `query` label on `prometheus_remote_storage_*` metrics to `remote_name` and `url`. #6043
+ + Enhancements
+ * TSDB: Significantly reduced memory footprint of loaded TSDB blocks. #6418 #6461
+ * TSDB: Significantly optimized what we buffer during compaction which should result in lower memory footprint during compaction. #6422 #6452 #6468 #6475
+ * TSDB: Improve replay latency. #6230
+ * TSDB: WAL size is now used for size based retention calculation. #5886
+ * Remote read: Added query grouping and range hints to the remote read request #6401
+ * Remote write: Added `prometheus_remote_storage_sent_bytes_total` counter per queue. #6344
+ * promql: Improved PromQL parser performance. #6356
+ * React UI: Implemented missing pages like `/targets` #6276, TSDB status page #6281 #6267 and many other fixes and performance improvements.
+ * promql: Prometheus now accepts spaces between time range and square bracket. e.g. `[ 5m]` #6065
+ + Bug fixes
+ * Config: Fixed alertmanager configuration to not miss targets when configurations are similar. #6455
+ * Remote write: Value of `prometheus_remote_storage_shards_desired` gauge shows raw value of desired shards and it's updated correctly. #6378
+ * Rules: Prometheus now fails the evaluation of rules and alerts where metric results collide with labels specified in `labels` field. #6469
+ * API: Targets Metadata API `/targets/metadata` now accepts empty `match_targets` parameter as in the spec. #6303
+
+- Changes from 2.14.0
+ + Features
+ * API: `/api/v1/status/runtimeinfo` and `/api/v1/status/buildinfo` endpoints added for use by the React UI. #6243
+ * React UI: implement the new experimental React based UI. #5694 and many more
+ * Can be found under `/new`.
+ * Not all pages are implemented yet.
+ * Status: Cardinality statistics added to the Runtime & Build Information page. #6125
+ + Enhancements
+ * Remote write: fix delays in remote write after a compaction. #6021
+ * UI: Alerts can be filtered by state. #5758
+ + Bug fixes
+ * Ensure warnings from the API are escaped. #6279
+ * API: lifecycle endpoints return 403 when not enabled. #6057
+ * Build: Fix Solaris build. #6149
+ * Promtool: Remove false duplicate rule warnings when checking rule files with alerts. #6270
+ * Remote write: restore use of deduplicating logger in remote write. #6113
+ * Remote write: do not reshard when unable to send samples. #6111
+ * Service discovery: errors are no longer logged on context cancellation. #6116, #6133
+ * UI: handle null response from API properly. #6071
+
+- Changes from 2.13.1
+ + Bug fixes
+ * Fix panic in ARM builds of Prometheus. #6110
+ * promql: fix potential panic in the query logger. #6094
+ * Multiple errors of http: superfluous response.WriteHeader call in the logs. #6145
+
+- Changes from 2.13.0
+ + Enhancements
+ * Metrics: renamed prometheus_sd_configs_failed_total to prometheus_sd_failed_configs and changed to Gauge #5254
+ * Include the tsdb tool in builds. #6089
+ * Service discovery: add new node address types for kubernetes. #5902
+ * UI: show warnings if a query has returned some warnings. #5964
+ * Remote write: reduce memory usage of the series cache. #5849
+ * Remote read: use remote read streaming to reduce memory usage. #5703
+ * Metrics: added metrics for remote write max/min/desired shards to queue manager. #5787
+ * Promtool: show the warnings during label query. #5924
+ * Promtool: improve error messages when parsing bad rules. #5965
+ * Promtool: more promlint rules. #5515
+ + Bug fixes
+ * UI: Fix a Stored DOM XSS vulnerability with query history [CVE-2019-10215](http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-10215). #6098
+ * Promtool: fix recording inconsistency due to duplicate labels. #6026
+ * UI: fixes service-discovery view when accessed from unhealthy targets. #5915
+ * Metrics format: OpenMetrics parser crashes on short input. #5939
+ * UI: avoid truncated Y-axis values. #6014
+
+- Changes from 2.12.0
+ + Features
+ * Track currently active PromQL queries in a log file. #5794
+ * Enable and provide binaries for `mips64` / `mips64le` architectures. #5792
+ + Enhancements
+ * Improve responsiveness of targets web UI and API endpoint. #5740
+ * Improve remote write desired shards calculation. #5763
+ * Flush TSDB pages more precisely. tsdb#660
+ * Add `prometheus_tsdb_retention_limit_bytes` metric. tsdb#667
+ * Add logging during TSDB WAL replay on startup. tsdb#662
+ * Improve TSDB memory usage. tsdb#653, tsdb#643, tsdb#654, tsdb#642, tsdb#627
+ + Bug fixes
+ * Check for duplicate label names in remote read. #5829
+ * Mark deleted rules' series as stale on next evaluation. #5759
+ * Fix JavaScript error when showing warning about out-of-sync server time. #5833
+ * Fix `promtool test rules` panic when providing empty `exp_labels`. #5774
+ * Only check last directory when discovering checkpoint number. #5756
+ * Fix error propagation in WAL watcher helper functions. #5741
+ * Correctly handle empty labels from alert templates. #5845
+
+-------------------------------------------------------------------
+Wed May 13 10:12:43 UTC 2020 - Joao Cavalheiro <[email protected]>
+
++++ 11 more lines (skipped)
++++ between
/work/SRC/openSUSE:Factory/golang-github-prometheus-prometheus/golang-github-prometheus-prometheus.changes
++++ and
/work/SRC/openSUSE:Factory/.golang-github-prometheus-prometheus.new.3606/golang-github-prometheus-prometheus.changes
Old:
----
prometheus-2.11.1.tar.xz
New:
----
prometheus-2.18.0.tar.xz
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ golang-github-prometheus-prometheus.spec ++++++
--- /var/tmp/diff_new_pack.47jWuP/_old 2020-06-11 10:00:27.766342126 +0200
+++ /var/tmp/diff_new_pack.47jWuP/_new 2020-06-11 10:00:27.770342139 +0200
@@ -1,7 +1,7 @@
#
# spec file for package golang-github-prometheus-prometheus
#
-# Copyright (c) 2019 SUSE LINUX GmbH, Nuernberg, Germany.
+# Copyright (c) 2020 SUSE LLC
# Copyright (c) 2017 Silvio Moioli <[email protected]>
#
# All modifications and additions to the file contributed by third parties
@@ -32,12 +32,12 @@
%{go_nostrip}
Name: golang-github-prometheus-prometheus
-Version: 2.11.1
+Version: 2.18.0
Release: 0
Summary: The Prometheus monitoring system and time series database
License: Apache-2.0
Group: System/Management
-Url: https://prometheus.io/
+URL: https://prometheus.io/
Source: prometheus-%{version}.tar.xz
Source1: prometheus.service
Source2: prometheus.yml
@@ -54,7 +54,7 @@
BuildRequires: golang-github-prometheus-promu
BuildRequires: golang-packaging
BuildRequires: xz
-BuildRequires: golang(API) >= 1.12
+BuildRequires: golang(API) = 1.14
BuildRoot: %{_tmppath}/%{name}-%{version}-build
%{?systemd_requires}
Requires(pre): shadow
++++++ 0001-Do-not-force-the-pure-Go-name-resolver.patch ++++++
--- /var/tmp/diff_new_pack.47jWuP/_old 2020-06-11 10:00:27.790342204 +0200
+++ /var/tmp/diff_new_pack.47jWuP/_new 2020-06-11 10:00:27.790342204 +0200
@@ -15,18 +15,16 @@
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.promu.yml b/.promu.yml
-index 5937513b..468ba2b5 100644
+index 0541bec..ad6cc78 100644
--- a/.promu.yml
+++ b/.promu.yml
-@@ -10,7 +10,7 @@ build:
- path: ./cmd/prometheus
- - name: promtool
+@@ -12,7 +12,7 @@ build:
path: ./cmd/promtool
-- flags: -mod=vendor -a -tags netgo
-+ flags: -mod=vendor -a
+ - name: tsdb
+ path: ./tsdb/cmd/tsdb
+- flags: -mod=vendor -a -tags netgo,builtinassets
++ flags: -mod=vendor -a -tags builtinassets
ldflags: |
-X github.com/prometheus/common/version.Version={{.Version}}
-X github.com/prometheus/common/version.Revision={{.Revision}}
---
-2.20.1
++++++ 0002-Default-settings.patch ++++++
--- /var/tmp/diff_new_pack.47jWuP/_old 2020-06-11 10:00:27.802342242 +0200
+++ /var/tmp/diff_new_pack.47jWuP/_new 2020-06-11 10:00:27.802342242 +0200
@@ -9,10 +9,10 @@
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/cmd/prometheus/main.go b/cmd/prometheus/main.go
-index 5529f0d..d6a18ff 100644
+index 2b70381..03af484 100644
--- a/cmd/prometheus/main.go
+++ b/cmd/prometheus/main.go
-@@ -133,7 +133,7 @@ func main() {
+@@ -143,7 +143,7 @@ func main() {
a.HelpFlag.Short('h')
a.Flag("config.file", "Prometheus configuration file path.").
@@ -21,7 +21,7 @@
a.Flag("web.listen-address", "Address to listen on for UI, API, and telemetry.").
Default("0.0.0.0:9090").StringVar(&cfg.web.ListenAddress)
-@@ -163,10 +163,10 @@ func main() {
+@@ -173,10 +173,10 @@ func main() {
Default("false").BoolVar(&cfg.web.EnableAdminAPI)
a.Flag("web.console.templates", "Path to the console template directory, available at /consoles.").
@@ -34,7 +34,7 @@
a.Flag("web.page-title", "Document title of Prometheus instance.").
Default("Prometheus Time Series Collection and Processing Server").StringVar(&cfg.web.PageTitle)
-@@ -175,7 +175,7 @@ func main() {
+@@ -185,7 +185,7 @@ func main() {
Default(".*").StringVar(&cfg.corsRegexString)
a.Flag("storage.tsdb.path", "Base path for metrics storage.").
++++++ 0003-Add-Uyuni-service-discovery.patch ++++++
--- /var/tmp/diff_new_pack.47jWuP/_old 2020-06-11 10:00:27.810342268 +0200
+++ /var/tmp/diff_new_pack.47jWuP/_new 2020-06-11 10:00:27.810342268 +0200
@@ -1,5 +1,5 @@
diff --git a/discovery/config/config.go b/discovery/config/config.go
-index 820de1f7..27d8c0cc 100644
+index 820de1f..27d8c0c 100644
--- a/discovery/config/config.go
+++ b/discovery/config/config.go
@@ -27,6 +27,7 @@ import (
@@ -20,7 +20,7 @@
// Validate validates the ServiceDiscoveryConfig.
diff --git a/discovery/manager.go b/discovery/manager.go
-index 1dbdecc8..ac621f3e 100644
+index 66c0057..f65cd04 100644
--- a/discovery/manager.go
+++ b/discovery/manager.go
@@ -37,6 +37,7 @@ import (
@@ -31,7 +31,7 @@
"github.com/prometheus/prometheus/discovery/zookeeper"
)
-@@ -406,6 +407,11 @@ func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setNam
+@@ -414,6 +415,11 @@ func (m *Manager) registerProviders(cfg sd_config.ServiceDiscoveryConfig, setNam
return triton.New(log.With(m.logger, "discovery",
"triton"), c)
})
}
@@ -45,10 +45,10 @@
return &StaticProvider{TargetGroups:
cfg.StaticConfigs}, nil
diff --git a/discovery/uyuni/uyuni.go b/discovery/uyuni/uyuni.go
new file mode 100644
-index 00000000..fcaaad0f
+index 00000000..18e0cfce
--- /dev/null
+++ b/discovery/uyuni/uyuni.go
-@@ -0,0 +1,340 @@
+@@ -0,0 +1,298 @@
+// Copyright 2019 The Prometheus Authors
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
@@ -66,13 +66,11 @@
+
+import (
+ "context"
-+ "encoding/json"
+ "fmt"
+ "net/http"
+ "net/url"
+ "regexp"
+ "strings"
-+ "sync"
+ "time"
+
+ "github.com/go-kit/kit/log"
@@ -86,8 +84,11 @@
+)
+
+const (
-+ uyuniLabel = model.MetaLabelPrefix + "uyuni_"
-+ uyuniLabelEntitlements = uyuniLabel + "entitlements"
++ uyuniLabel = model.MetaLabelPrefix + "uyuni_"
++ uyuniLabelEntitlements = uyuniLabel + "entitlements"
++ monitoringEntitlementLabel = "monitoring_entitled"
++ prometheusExporterFormulaName = "prometheus-exporters"
++ uyuniXMLRPCAPIPath = "/rpc/api"
+)
+
+// DefaultSDConfig is the default Uyuni SD configuration.
@@ -107,25 +108,15 @@
+}
+
+// Uyuni API Response structures
-+type clientRef struct {
-+ ID int `xmlrpc:"id"`
-+ Name string `xmlrpc:"name"`
-+}
-+
-+type systemDetail struct {
-+ ID int `xmlrpc:"id"`
-+ Hostname string `xmlrpc:"hostname"`
-+ Entitlements []string `xmlrpc:"addon_entitlements"`
-+}
-+
-+type groupDetail struct {
-+ ID int `xmlrpc:"id"`
-+ Subscribed int `xmlrpc:"subscribed"`
-+ SystemGroupName string `xmlrpc:"system_group_name"`
++type systemGroupID struct {
++ GroupID int `xmlrpc:"id"`
++ GroupName string `xmlrpc:"name"`
+}
+
+type networkInfo struct {
-+ IP string `xmlrpc:"ip"`
++ SystemID int `xmlrpc:"system_id"`
++ Hostname string `xmlrpc:"hostname"`
++ IP string `xmlrpc:"ip"`
+}
+
+type exporterConfig struct {
@@ -179,46 +170,52 @@
+ return err
+}
+
-+// Get system list
-+func listSystems(rpcclient *xmlrpc.Client, token string) ([]clientRef, error)
{
-+ var result []clientRef
-+ err := rpcclient.Call("system.listSystems", token, &result)
-+ return result, err
-+}
-+
-+// Get system details
-+func getSystemDetails(rpcclient *xmlrpc.Client, token string, systemID int)
(systemDetail, error) {
-+ var result systemDetail
-+ err := rpcclient.Call("system.getDetails", []interface{}{token,
systemID}, &result)
-+ return result, err
-+}
-+
-+// Get list of groups a system belongs to
-+func listSystemGroups(rpcclient *xmlrpc.Client, token string, systemID int)
([]groupDetail, error) {
-+ var result []groupDetail
-+ err := rpcclient.Call("system.listGroups", []interface{}{token,
systemID}, &result)
-+ return result, err
++// Get the system groups information of monitored clients
++func getSystemGroupsInfoOfMonitoredClients(rpcclient *xmlrpc.Client, token
string) (map[int][]systemGroupID, error) {
++ var systemGroupsInfos []struct {
++ SystemID int `xmlrpc:"id"`
++ SystemGroups []systemGroupID `xmlrpc:"system_groups"`
++ }
++ err :=
rpcclient.Call("system.listSystemGroupsForSystemsWithEntitlement",
[]interface{}{token, monitoringEntitlementLabel}, &systemGroupsInfos)
++ if err != nil {
++ return nil, err
++ }
++ result := make(map[int][]systemGroupID)
++ for _, systemGroupsInfo := range systemGroupsInfos {
++ result[systemGroupsInfo.SystemID] =
systemGroupsInfo.SystemGroups
++ }
++ return result, nil
+}
+
+// GetSystemNetworkInfo lists client FQDNs
-+func getSystemNetworkInfo(rpcclient *xmlrpc.Client, token string, systemID
int) (networkInfo, error) {
-+ var result networkInfo
-+ err := rpcclient.Call("system.getNetwork", []interface{}{token,
systemID}, &result)
-+ return result, err
++func getNetworkInformationForSystems(rpcclient *xmlrpc.Client, token string,
systemIDs []int) (map[int]networkInfo, error) {
++ var networkInfos []networkInfo
++ err := rpcclient.Call("system.getNetworkForSystems",
[]interface{}{token, systemIDs}, &networkInfos)
++ if err != nil {
++ return nil, err
++ }
++ result := make(map[int]networkInfo)
++ for _, networkInfo := range networkInfos {
++ result[networkInfo.SystemID] = networkInfo
++ }
++ return result, nil
+}
+
+// Get formula data for a given system
-+func getSystemFormulaData(rpcclient *xmlrpc.Client, token string, systemID
int, formulaName string) (map[string]exporterConfig, error) {
-+ var result map[string]exporterConfig
-+ err := rpcclient.Call("formula.getSystemFormulaData",
[]interface{}{token, systemID, formulaName}, &result)
-+ return result, err
-+}
-+
-+// Get formula data for a given group
-+func getGroupFormulaData(rpcclient *xmlrpc.Client, token string, groupID int,
formulaName string) (map[string]exporterConfig, error) {
-+ var result map[string]exporterConfig
-+ err := rpcclient.Call("formula.getGroupFormulaData",
[]interface{}{token, groupID, formulaName}, &result)
-+ return result, err
++func getExporterDataForSystems(rpcclient *xmlrpc.Client, token string,
systemIDs []int) (map[int]map[string]exporterConfig, error) {
++ var combinedFormulaDatas []struct {
++ SystemID int `xmlrpc:"system_id"`
++ ExporterConfigs map[string]exporterConfig
`xmlrpc:"formula_values"`
++ }
++ err := rpcclient.Call("formula.getCombinedFormulaDataByServerIds",
[]interface{}{token, prometheusExporterFormulaName, systemIDs},
&combinedFormulaDatas)
++ if err != nil {
++ return nil, err
++ }
++ result := make(map[int]map[string]exporterConfig)
++ for _, combinedFormulaData := range combinedFormulaDatas {
++ result[combinedFormulaData.SystemID] =
combinedFormulaData.ExporterConfigs
++ }
++ return result, nil
+}
+
+// Get exporter port configuration from Formula
@@ -230,17 +227,6 @@
+ return tokens[1], nil
+}
+
-+// Take a current formula structure and override values if the new config is
set
-+// Used for calculating final formula values when using groups
-+func getCombinedFormula(combined map[string]exporterConfig, new
map[string]exporterConfig) map[string]exporterConfig {
-+ for k, v := range new {
-+ if v.Enabled {
-+ combined[k] = v
-+ }
-+ }
-+ return combined
-+}
-+
+// NewDiscovery returns a new file discovery for the given paths.
+func NewDiscovery(conf *SDConfig, logger log.Logger) *Discovery {
+ d := &Discovery{
@@ -257,138 +243,135 @@
+ return d
+}
+
-+func (d *Discovery) refresh(ctx context.Context) ([]*targetgroup.Group,
error) {
++func (d *Discovery) getTargetsForSystem(systemID int, systemGroupsIDs
[]systemGroupID, networkInfo networkInfo, combinedFormulaData
map[string]exporterConfig) []model.LabelSet {
++ labelSets := make([]model.LabelSet, 0)
++ for exporter, exporterConfig := range combinedFormulaData {
++ if exporterConfig.Enabled {
++ port, err :=
extractPortFromFormulaData(exporterConfig.Args)
++ if err == nil {
++ targets := model.LabelSet{}
++ addr := fmt.Sprintf("%s:%s", networkInfo.IP,
port)
++ targets[model.AddressLabel] =
model.LabelValue(addr)
++ targets["exporter"] = model.LabelValue(exporter)
++ targets["hostname"] =
model.LabelValue(networkInfo.Hostname)
++
++ managedGroupNames := make([]string, 0,
len(systemGroupsIDs))
++ for _, systemGroupInfo := range systemGroupsIDs
{
++ managedGroupNames =
append(managedGroupNames, systemGroupInfo.GroupName)
++ }
+
-+ config := d.sdConfig
-+ apiURL := config.Host + "/rpc/api"
++ if len(managedGroupNames) == 0 {
++ managedGroupNames = []string{"No group"}
++ }
+
-+ // Check if the URL is valid and create rpc client
-+ _, err := url.ParseRequestURI(apiURL)
-+ if err != nil {
-+ return nil, errors.Wrap(err, "Uyuni Server URL is not valid")
++ targets["groups"] =
model.LabelValue(strings.Join(managedGroupNames, ","))
++ labelSets = append(labelSets, targets)
++
++ } else {
++ level.Error(d.logger).Log("msg", "Invalid
exporter port", "clientId", systemID, "err", err)
++ }
++ }
++ }
++ return labelSets
++}
++
++func (d *Discovery) getTargetsForSystems(rpcClient *xmlrpc.Client, token
string, systemGroupIDsBySystemID map[int][]systemGroupID) ([]model.LabelSet,
error) {
++ result := make([]model.LabelSet, 0)
++
++ systemIDs := make([]int, 0, len(systemGroupIDsBySystemID))
++ for systemID := range systemGroupIDsBySystemID {
++ systemIDs = append(systemIDs, systemID)
+ }
-+ rpc, _ := xmlrpc.NewClient(apiURL, nil)
-+ tg := &targetgroup.Group{Source: config.Host}
+
-+ // Login into Uyuni API and get auth token
-+ token, err := login(rpc, config.User, config.Pass)
++ combinedFormulaDataBySystemID, err :=
getExporterDataForSystems(rpcClient, token, systemIDs)
+ if err != nil {
-+ return nil, errors.Wrap(err, "Unable to login to Uyuni API")
++ return nil, errors.Wrap(err, "Unable to get systems combined
formula data")
+ }
-+ // Get list of managed clients from Uyuni API
-+ clientList, err := listSystems(rpc, token)
++ networkInfoBySystemID, err :=
getNetworkInformationForSystems(rpcClient, token, systemIDs)
+ if err != nil {
-+ return nil, errors.Wrap(err, "Unable to get list of systems")
++ return nil, errors.Wrap(err, "Unable to get the systems network
information")
+ }
+
-+ // Iterate list of clients
-+ if len(clientList) == 0 {
-+ fmt.Printf("\tFound 0 systems.\n")
-+ } else {
-+ startTime := time.Now()
-+ var wg sync.WaitGroup
-+ wg.Add(len(clientList))
-+
-+ for _, cl := range clientList {
-+
-+ go func(client clientRef) {
-+ defer wg.Done()
-+ rpcclient, _ := xmlrpc.NewClient(apiURL, nil)
-+ netInfo := networkInfo{}
-+ formulas := map[string]exporterConfig{}
-+ groups := []groupDetail{}
++ for _, systemID := range systemIDs {
++ targets := d.getTargetsForSystem(systemID,
systemGroupIDsBySystemID[systemID], networkInfoBySystemID[systemID],
combinedFormulaDataBySystemID[systemID])
++ result = append(result, targets...)
+
-+ // Get the system details
-+ details, err := getSystemDetails(rpcclient,
token, client.ID)
++ // Log debug information
++ if networkInfoBySystemID[systemID].IP != "" {
++ level.Debug(d.logger).Log("msg", "Found monitored
system",
++ "Host",
networkInfoBySystemID[systemID].Hostname,
++ "Network", fmt.Sprintf("%+v",
networkInfoBySystemID[systemID]),
++ "Groups", fmt.Sprintf("%+v",
systemGroupIDsBySystemID[systemID]),
++ "Formulas", fmt.Sprintf("%+v",
combinedFormulaDataBySystemID[systemID]))
++ }
++ }
++ return result, nil
++}
+
-+ if err != nil {
-+ level.Error(d.logger).Log("msg",
"Unable to get system details", "clientId", client.ID, "err", err)
-+ return
-+ }
-+ jsonDetails, _ := json.Marshal(details)
-+ level.Debug(d.logger).Log("msg", "System
details", "details", jsonDetails)
++func (d *Discovery) refresh(ctx context.Context) ([]*targetgroup.Group,
error) {
++ config := d.sdConfig
++ apiURL := config.Host + uyuniXMLRPCAPIPath
+
-+ // Check if system is monitoring entitled
-+ for _, v := range details.Entitlements {
-+ if v == "monitoring_entitled" { //
golang has no native method to check if an element is part of a slice
-+
-+ // Get network details
-+ netInfo, err =
getSystemNetworkInfo(rpcclient, token, client.ID)
-+ if err != nil {
-+
level.Error(d.logger).Log("msg", "getSystemNetworkInfo failed", "clientId",
client.ID, "err", err)
-+ return
-+ }
++ startTime := time.Now()
+
-+ // Get list of groups this
system is assigned to
-+ candidateGroups, err :=
listSystemGroups(rpcclient, token, client.ID)
-+ if err != nil {
-+
level.Error(d.logger).Log("msg", "listSystemGroups failed", "clientId",
client.ID, "err", err)
-+ return
-+ }
-+ groups := []string{}
-+ for _, g := range
candidateGroups {
-+ // get list of group
formulas
-+ // TODO: Put the
resulting data on a map so that we do not have to repeat the call below for
every system
-+ if g.Subscribed == 1 {
-+ groupFormulas,
err := getGroupFormulaData(rpcclient, token, g.ID, "prometheus-exporters")
-+ if err != nil {
-+
level.Error(d.logger).Log("msg", "getGroupFormulaData failed", "groupId",
client.ID, "err", err)
-+ return
-+ }
-+ formulas =
getCombinedFormula(formulas, groupFormulas)
-+ // replace
spaces with dashes on all group names
-+ groups =
append(groups, strings.ToLower(strings.ReplaceAll(g.SystemGroupName, " ", "-")))
-+ }
-+ }
++ // Check if the URL is valid and create rpc client
++ _, err := url.ParseRequestURI(apiURL)
++ if err != nil {
++ return nil, errors.Wrap(err, "Uyuni Server URL is not valid")
++ }
+
-+ // Get system formula list
-+ systemFormulas, err :=
getSystemFormulaData(rpcclient, token, client.ID, "prometheus-exporters")
-+ if err != nil {
-+
level.Error(d.logger).Log("msg", "getSystemFormulaData failed", "clientId",
client.ID, "err", err)
-+ return
-+ }
-+ formulas =
getCombinedFormula(formulas, systemFormulas)
++ rpcClient, _ := xmlrpc.NewClient(apiURL, nil)
+
-+ // Iterate list of formulas and
check for enabled exporters
-+ for k, v := range formulas {
-+ if v.Enabled {
-+ port, err :=
extractPortFromFormulaData(v.Args)
-+ if err != nil {
-+
level.Error(d.logger).Log("msg", "Invalid exporter port", "clientId",
client.ID, "err", err)
-+ return
-+ }
-+ targets :=
model.LabelSet{}
-+ addr :=
fmt.Sprintf("%s:%s", netInfo.IP, port)
-+
targets[model.AddressLabel] = model.LabelValue(addr)
-+
targets["exporter"] = model.LabelValue(k)
-+
targets["hostname"] = model.LabelValue(details.Hostname)
-+
targets["groups"] = model.LabelValue(strings.Join(groups, ","))
-+ for _, g :=
range groups {
-+ gname
:= fmt.Sprintf("grp:%s", g)
-+
targets[model.LabelName(gname)] = model.LabelValue("active")
-+ }
-+ tg.Targets =
append(tg.Targets, targets)
-+ }
-+ }
-+ }
-+ }
-+ // Log debug information
-+ if netInfo.IP != "" {
-+ level.Info(d.logger).Log("msg", "Found
monitored system", "Host", details.Hostname,
-+ "Entitlements",
fmt.Sprintf("%+v", details.Entitlements),
-+ "Network", fmt.Sprintf("%+v",
netInfo), "Groups",
-+ fmt.Sprintf("%+v", groups),
"Formulas", fmt.Sprintf("%+v", formulas))
-+ }
-+ rpcclient.Close()
-+ }(cl)
++ token, err := login(rpcClient, config.User, config.Pass)
++ if err != nil {
++ return nil, errors.Wrap(err, "Unable to login to Uyuni API")
++ }
++ systemGroupIDsBySystemID, err :=
getSystemGroupsInfoOfMonitoredClients(rpcClient, token)
++ if err != nil {
++ return nil, errors.Wrap(err, "Unable to get the managed system
groups information of monitored clients")
++ }
++
++ targets := make([]model.LabelSet, 0)
++ if len(systemGroupIDsBySystemID) > 0 {
++ targetsForSystems, err := d.getTargetsForSystems(rpcClient,
token, systemGroupIDsBySystemID)
++ if err != nil {
++ return nil, err
+ }
-+ wg.Wait()
++ targets = append(targets, targetsForSystems...)
+ level.Info(d.logger).Log("msg", "Total discovery time", "time",
time.Since(startTime))
++ } else {
++ fmt.Printf("\tFound 0 systems.\n")
+ }
-+ logout(rpc, token)
-+ rpc.Close()
-+ return []*targetgroup.Group{tg}, nil
-+}
++
++ logout(rpcClient, token)
++ rpcClient.Close()
++ return []*targetgroup.Group{&targetgroup.Group{Targets: targets,
Source: config.Host}}, nil
++}
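The Uyuni patch above derives each target's port from the exporter's formula arguments (see `extractPortFromFormulaData`, which splits on `:` and takes the second token). A simplified standalone sketch of that parsing follows; the exact argument shape shown here is an assumption for illustration:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// extractPort pulls the port out of an exporter argument such as
// `--web.listen-address=":9100"`. Simplified sketch of the helper used
// by the Uyuni discovery patch, with quote-trimming added for clarity.
func extractPort(arg string) (string, error) {
	tokens := strings.Split(arg, ":")
	if len(tokens) < 2 {
		return "", errors.New("no port found in argument")
	}
	return strings.Trim(tokens[1], `"`), nil
}

func main() {
	port, err := extractPort(`--web.listen-address=":9100"`)
	if err != nil {
		panic(err)
	}
	// The discovery code then joins this port with the system's IP to
	// form the target address, e.g. "192.0.2.10:" + port.
	fmt.Println(port)
}
```

If the formula data carries no `host:port` shaped argument, the real discovery code logs an "Invalid exporter port" error for that client instead of emitting a target.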
+diff --git a/go.mod b/go.mod
+index 0b5a585..5a95ffb 100644
+--- a/go.mod
++++ b/go.mod
+@@ -41,6 +41,7 @@ require (
+ github.com/jpillora/backoff v1.0.0 // indirect
+ github.com/json-iterator/go v1.1.9
+ github.com/julienschmidt/httprouter v1.3.0 // indirect
++ github.com/kolo/xmlrpc v0.0.0-20200310150728-e0350524596b
+ github.com/mattn/go-colorable v0.1.6 // indirect
+ github.com/miekg/dns v1.1.29
+ github.com/mitchellh/mapstructure v1.2.2 // indirect
+diff --git a/go.sum b/go.sum
+index 7941bbe..9f31b87 100644
+--- a/go.sum
++++ b/go.sum
+@@ -505,6 +505,8 @@ github.com/klauspost/compress v1.9.5/go.mod
h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0
+ github.com/klauspost/cpuid v0.0.0-20170728055534-ae7887de9fa5/go.mod
h1:Pj4uuM528wm8OyEC2QMXAi2YiTZ96dNQPGgoMS4s3ek=
+ github.com/klauspost/crc32 v0.0.0-20161016154125-cb6bfca970f6/go.mod
h1:+ZoRqAPRLkC4NPOvfYeR5KNOrY6TD+/sAC3HXPZgDYg=
+ github.com/klauspost/pgzip v1.0.2-0.20170402124221-0bf5dcad4ada/go.mod
h1:Ch1tH69qFZu15pkjo5kYi6mth2Zzwzt50oCQKQE9RUs=
++github.com/kolo/xmlrpc v0.0.0-20200310150728-e0350524596b
h1:DzHy0GlWeF0KAglaTMY7Q+khIFoG8toHP+wLFBVBQJc=
++github.com/kolo/xmlrpc v0.0.0-20200310150728-e0350524596b/go.mod
h1:o03bZfuBwAXHetKXuInt4S7omeXUu62/A845kiycsSQ=
+ github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod
h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+ github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod
h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+ github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515
h1:T+h1c/A9Gawja4Y9mFVWj2vyii2bbUNDw3kt9VxK2EY=
diff --git a/vendor/github.com/kolo/xmlrpc/LICENSE
b/vendor/github.com/kolo/xmlrpc/LICENSE
new file mode 100644
index 00000000..8103dd13
@@ -1801,7 +1784,7 @@
+<?xml version="1.0" encoding="cp1251" ?>
+<methodResponse>
+ <params>
-+ <param><value><string>Л.Н. Толстой - Война и Мир</string></value></param>
++ <param><value><string>Л.Н. Толстой - Война и Мир</string></value></param>
+ </params>
+</methodResponse>
\ No newline at end of file
++++++ _service ++++++
--- /var/tmp/diff_new_pack.47jWuP/_old 2020-06-11 10:00:27.834342345 +0200
+++ /var/tmp/diff_new_pack.47jWuP/_new 2020-06-11 10:00:27.834342345 +0200
@@ -3,8 +3,8 @@
<param name="url">https://github.com/prometheus/prometheus.git</param>
<param name="scm">git</param>
<param name="exclude">.git</param>
- <param name="versionformat">2.11.1</param>
- <param name="revision">v2.11.1</param>
+ <param name="versionformat">2.18.0</param>
+ <param name="revision">v2.18.0</param>
</service>
<service name="recompress" mode="disabled">
<param name="file">prometheus-*.tar</param>
++++++ prometheus-2.11.1.tar.xz -> prometheus-2.18.0.tar.xz ++++++
/work/SRC/openSUSE:Factory/golang-github-prometheus-prometheus/prometheus-2.11.1.tar.xz
/work/SRC/openSUSE:Factory/.golang-github-prometheus-prometheus.new.3606/prometheus-2.18.0.tar.xz
differ: char 26, line 1