> It still helps a lot, because you can have many reporting programs, each 
> talking to different processes on the server, and those processes are able 
> to get the transactions done very quickly, with multiple write daemons and 
> journaling daemons doing the actual I/O from the shared buffer pool.  (and 
> yes, this has been used precisely for collecting metrics, on a very large 
> scale, across all of Germany in one case, IIRC)
>

I've gotten curious about this use case; are you permitted to share more 
details about it? Were they technical or business metrics? What volume of 
data passed through the system, and what were the server's parameters? 

In fact, my initial question in this thread was purposely very broad, 
because I was looking for any related projects. Now I see two different 
kinds of metric collection systems. One is for reporting and monitoring 
purposes, e.g. measuring operation run time, memory usage, the number of 
simultaneously connected users, etc. Normally, in such systems metrics are 
not stored on the collector side for long; instead they are aggregated and 
sent to something like Graphite almost immediately. It's also acceptable to 
lose some of this information or to delete old metrics. The other kind is 
for collecting and later analyzing business metrics, and in that case 
reliable storage comes first. 
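
To make the first kind concrete, here's a rough sketch of what I have in 
mind (the Graphite host and the metric names are just placeholders): 
counters and timings are aggregated in memory and periodically flushed to 
Graphite over its plaintext protocol, and a batch that fails to send is 
simply dropped, since losing some monitoring data is acceptable.

    # Minimal sketch of a monitoring-style collector.
    # GRAPHITE_HOST and the metric names below are hypothetical.
    import socket
    import time
    from collections import defaultdict

    GRAPHITE_HOST = "graphite.example.com"  # placeholder carbon host
    GRAPHITE_PORT = 2003                    # default carbon plaintext port

    class Aggregator:
        def __init__(self):
            self.counters = defaultdict(int)
            self.timings = defaultdict(list)

        def incr(self, name, value=1):
            self.counters[name] += value

        def timing(self, name, seconds):
            self.timings[name].append(seconds)

        def flush(self):
            """Send aggregated values to Graphite and reset local state."""
            now = int(time.time())
            lines = [f"{name} {value} {now}"
                     for name, value in self.counters.items()]
            for name, samples in self.timings.items():
                if samples:
                    avg = sum(samples) / len(samples)
                    lines.append(f"{name}.avg {avg} {now}")
            self.counters.clear()
            self.timings.clear()
            if not lines:
                return
            try:
                with socket.create_connection(
                        (GRAPHITE_HOST, GRAPHITE_PORT), timeout=2) as sock:
                    sock.sendall(("\n".join(lines) + "\n").encode())
            except OSError:
                pass  # dropping a batch is fine for monitoring metrics

    # Example: record an operation's run time and connected-user count.
    agg = Aggregator()
    agg.timing("app.query.duration_seconds", 0.042)
    agg.incr("app.users.connected", 57)
    agg.flush()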

My primary goal in this little project is to build a system for the first 
kind of metrics, but if there's interest in the second kind, I'll be glad 
to spend some time making something useful for a broader range of users. 
