On Tue, 04 Nov 2014 11:22:00 -0800
Rob Miller <[email protected]> wrote:

> Getting caught up on backlog.... sorry for the delay.
> 
> On 10/18/2014 01:06 PM, Dieter Plaetinck wrote:
> >
> > So I plan to do some experimenting with heka, but meanwhile I'd like to 
> > hear from you.
> >
> > 1) does anyone have some empirical numbers on heka-as-statsd performance? 
> > In particular, when does it start dropping UDP packets? What CPU % do you 
> > see (in particular, can you spot spikes), and how many msg/s is it doing?
> > 2) have you also noticed reading messages from the UDP connection going 
> > slowly/maxing out CPU? I see Heka calls Read instead of ReadFromUDP like I 
> > do. I should look into that..
> Unfortunately, I'm afraid we haven't done any of the detailed analysis you're 
> describing of our statsd performance. I'd love to see it happen though, would 
> you be interested in giving it a try?


Interested, sure; whether I'll have the time, we'll see :p

> 
> > 3) does heka leverage multi-core properly for the statsd workload 
> > (receiving the packets, and doing the aggregations)?
> Not sure what you mean by "properly", but it is true that the StatsdInput, 
> which handles the UDP input and the parsing of the statsd string value, runs 
> in a different goroutine than the StatAccumInput, which handles the 
> aggregation, so those two tasks can run on separate cores. It would also be 
> possible to use a regular UDP input, implement the statsd parsing in a 
> decoder, and feed the stats into a StatAccumInput, which would mean the whole 
> job would be spread across three different goroutines. But some testing would 
> need to be done to see whether the benefits of spreading those jobs across 
> multiple cores outweigh the drawbacks of the channel synchronization that is 
> introduced with the use of more goroutines.

I was thinking along the lines of spreading the workload of reading packets 
and/or computing the aggregate stats (timer, counter, ...) for all metric keys 
(stataccum) across different cores. In my experience those parts are the most 
CPU-intensive and could (in theory) be fairly easily parallelized (especially 
the latter), but there's no point going deeper into this without benchmarking 
the current code first.


> Sorry to not be much help,

thanks either way!

_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka
