Hi,

At LinkedIn we use an audit module to track the latency and message counts
at each "tier" of the pipeline (in your example, the producer / local /
central / HDFS tiers). Some details can be found in our recent talk slides
(slides 41/42):

http://www.slideshare.net/GuozhangWang/apache-kafka-at-linkedin-43307044

This audit is currently specific to our use of Avro as the serialization
format, though, and we are considering ways to generalize it and hence
open-source it.
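The idea behind such an audit can be sketched roughly as follows: every
event carries its production timestamp, each tier reports a count and the
observed latency relative to that timestamp, and comparing counts between
tiers reveals loss while comparing latencies reveals where time is spent.
This is only a minimal illustration under those assumptions, not
LinkedIn's actual module; the tier names and in-memory aggregation are
made up for the example:

```python
import time
from collections import defaultdict

class AuditTracker:
    """Minimal per-tier audit sketch: records, for each tier, the number
    of messages seen and the latency between the event's production
    timestamp and the time it was observed at that tier."""

    def __init__(self):
        self.counts = defaultdict(int)
        self.latency_sums = defaultdict(float)

    def record(self, tier, event_ts, observed_ts=None):
        # Latency at a tier = time the event was observed there minus
        # the time it was originally produced.
        if observed_ts is None:
            observed_ts = time.time()
        self.counts[tier] += 1
        self.latency_sums[tier] += observed_ts - event_ts

    def report(self):
        # Per-tier (count, average latency). A count mismatch between
        # adjacent tiers would indicate loss in between; a jump in
        # average latency shows where time is being spent.
        return {t: (self.counts[t], self.latency_sums[t] / self.counts[t])
                for t in self.counts}

# Example: one event produced at t=100.0 observed at successive tiers.
audit = AuditTracker()
audit.record("producer", 100.0, 100.1)
audit.record("local", 100.0, 100.5)
audit.record("central", 100.0, 102.0)
audit.record("hdfs", 100.0, 110.0)
```

In a real deployment the counts and latencies would be emitted to a
separate audit topic rather than held in memory, so a central consumer
can reconcile them across tiers.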

Guozhang


On Mon, Jan 5, 2015 at 3:33 PM, Otis Gospodnetic <otis.gospodne...@gmail.com
> wrote:

> Hi,
>
> That sounds a bit like needing a full, cross-app, cross-network
> transaction/call tracing, and not something specific or limited to Kafka,
> doesn't it?
>
> Otis
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
> On Mon, Jan 5, 2015 at 2:43 PM, Bhavesh Mistry <mistry.p.bhav...@gmail.com
> >
> wrote:
>
> > Hi Kafka Team/Users,
> >
> > We are using the LinkedIn Kafka data pipeline end-to-end.
> >
> > Producer(s) ->Local DC Brokers -> MM -> Central brokers -> Camus Job ->
> > HDFS
> >
> > This is working out very well for us, but we need visibility into the
> > latency at each layer (Local DC Brokers -> MM -> Central brokers -> Camus
> > Job -> HDFS).  Our events are time-based (they carry the time the event
> > was produced).  Is there any feature or audit trail mentioned at (
> > https://github.com/linkedin/camus/)?  I would like to know the
> > in-between latency and the time an event spends at each hop; right now
> > we do not know where the problem is or what to optimize.
> >
> > Is any of this covered in 0.9.0 or any other upcoming Kafka release?
> > How might we achieve this latency tracking across all components?
> >
> >
> > Thanks,
> >
> > Bhavesh
> >
>



-- 
-- Guozhang
