Almost certainly the slowest part of your pipeline there is the ESJsonEncoder. What does the throughput look like if you replace that with a PayloadEncoder?
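For reference, swapping the encoder is a one-line change in the output section of the TOML config. A minimal sketch of the pipeline described below, assuming default plugin names and an illustrative listen address and output path (adjust both to your setup):

```toml
[hekad]
maxprocs = 4

[TcpInput]
address = ":5565"
splitter = "TokenSplitter"

[PayloadEncoder]

[FileOutput]
message_matcher = "TRUE"
path = "/var/log/heka/out.log"
# Previously: encoder = "ESJsonEncoder"
encoder = "PayloadEncoder"
```

PayloadEncoder just emits the message payload as-is, skipping the per-message JSON serialization that ESJsonEncoder performs, so it makes a good baseline for isolating encoder cost.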
On 09/22/2015 09:33 PM, Andre wrote:
Hi All,

I was doing some performance tests of hekad on a simple KVM test VM running Ubuntu (2 GB RAM, 4 cores, magnetic drives). The test pipeline was a traditional vanilla pipeline similar to:

TcpInput -> TokenSplitter -> ESJsonEncoder -> FileOutput

The sample workload was a stream of plain-text TCP syslog generated by syslog-ng's loggen tool. The good news is that there's no message loss; the bad news is that performance is somewhat lacklustre. While a normal rsyslogd doing similar work runs at significantly higher rates, hekad was stuck around 12K EPS.

I started by setting maxprocs to 4, then tried playing with the poolsize (setting it to a large value like 50000), but the only consequence was heka consuming more memory; EPS stayed more or less the same.

Has anyone exceeded this performance with hekad under similar pipelines (i.e. TcpInput) and hardware conditions (small VMs)? If so, would you mind sharing a bit about how hekad was configured?

Kind regards

_______________________________________________
Heka mailing list
[email protected]
https://mail.mozilla.org/listinfo/heka

