Sure, just call count() on each RDD and track the running total in your
driver however you want.

If count() is called directly on a KafkaRDD (e.g. createDirectStream, then
foreachRDD before doing any other transformations), it should just be
computed from the beginning and ending offsets rather than doing any real
work.
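
Here's a minimal sketch of what I mean, against the Spark 1.x / Kafka 0.8
direct stream API. The broker address, topic name, and batch interval are
placeholders; the key point is that foreachRDD's body runs in the driver,
so a plain var works for the running total, and count() on the KafkaRDD is
cheap because it comes from the offset ranges:

import kafka.serializer.StringDecoder
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object RunningKafkaCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("RunningKafkaCount")
    val ssc = new StreamingContext(conf, Seconds(5))

    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val topics = Set("mytopic")

    // Each batch of this stream is a KafkaRDD, so count() is derived
    // from the beginning/ending offsets of each partition.
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)

    // Running total kept in the driver; foreachRDD's closure below
    // executes on the driver, so updating this var is safe.
    var totalMessages = 0L

    stream.foreachRDD { rdd =>
      val batchCount = rdd.count() // no real work for a KafkaRDD
      totalMessages += batchCount
      println(s"this batch: $batchCount, running total: $totalMessages")
    }

    ssc.start()
    ssc.awaitTermination()
  }
}

If you need the total to survive driver restarts, you'd have to persist it
externally (e.g. alongside your offsets), but for a simple running count
this is all it takes.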

On Tue, Nov 17, 2015 at 4:48 AM, Chandra Mohan, Ananda Vel Murugan <
ananda.muru...@honeywell.com> wrote:

> Hi,
>
> Is it possible to have a running count of the number of Kafka messages
> processed in a Spark Streaming application? Thanks
>
> Regards,
> Anand.C
