Addshore added a comment.

> Content metrics require long term (non-decaying) storage, operational metrics 
> do not.


Both cases can be covered by configuration

> Of course, if we assume that the need is only to record a small number of 
> data points with a low resolution, none of this matters.


That is my current assumption, backed by having a limited number of things to 
record (an incredibly small number compared with what is on the current 
Graphite instance).

> The added complexity of introducing backups and HDFS


Well, we need not add HDFS. Backups can simply call the API and dump a TSV, 
which could easily be stored in HDFS, or somewhere else. Or just a cron job 
backing up the actual Graphite database. This could even just live on Labs.

I really don't know why we are all expecting graphite to unexpectedly lose our 
data. If you configure it to keep the data for 100 years / 25 years / whatever, 
it will. I see no reports of parts of Graphite's databases vanishing when not 
already configured to do so.
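For reference, a hypothetical storage-schemas.conf entry (section name and pattern are illustrative); whisper only ever expires data according to the retention configured here:

```ini
# Keep one data point per day for 25 years for anything under wikidata.*
[wikidata]
pattern = ^wikidata\.
retentions = 1d:25y
```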


TASK DETAIL
  https://phabricator.wikimedia.org/T117732


To: Addshore
Cc: Joe, Lydia_Pintscher, fgiunchedi, Christopher, JanZerebecki, Nuria, 
Ottomata, Aklapper, Addshore, StudiesWorld, Wikidata-bugs, aude, Mbch331


