Hi,

I have raised a JIRA (https://issues.apache.org/jira/browse/SPARK-11045)
to track the discussion, and I am also mailing the user group.

This Kafka consumer has been around for a while on spark-packages
(http://spark-packages.org/package/dibbhatt/kafka-spark-consumer), and I
see that many have started using it. I am now thinking of contributing it
back to the Apache Spark core project so that it can get better support,
visibility, and adoption.

A few points about this consumer:

*Why this is needed:*

This consumer is NOT a replacement for the existing DirectStream API.
DirectStream solves the problems of "Exactly Once" semantics and "Global
Ordering" of messages, but achieving this comes with overhead: the offsets
must be maintained externally, parallelism while processing the RDD is
limited (each RDD partition maps to exactly one Kafka partition), and
latency is higher (messages are fetched only when the RDD is processed).
Many users do not need "Exactly Once" semantics or "Global Ordering", or
they manage ordering in an external store (say HBase), and they want more
parallelism and lower latency in their streaming channel. At this point
Spark does not have a good fallback option for them in terms of a
Receiver-based API: the present Receiver-based API uses the Kafka High
Level API, which performs poorly and has serious known issues. (For this
reason Kafka is coming up with a new High Level Consumer API in 0.9.)
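
To make the parallelism point concrete, here is a minimal sketch against
the Spark 1.x Streaming API (the broker address and topic name are
placeholders):

    import kafka.serializer.StringDecoder
    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val ssc = new StreamingContext(
      new SparkConf().setAppName("direct-demo"), Seconds(5))
    // "broker1:9092" and "events" are placeholders.
    val kafkaParams = Map("metadata.broker.list" -> "broker1:9092")
    val stream = KafkaUtils.createDirectStream[
      String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))

    stream.foreachRDD { rdd =>
      // One RDD partition per Kafka partition: parallelism is capped at
      // the topic's partition count unless you pay for a repartition.
      println(s"RDD partitions = ${rdd.partitions.length}")
    }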

The consumer I implemented uses the Kafka Low Level API, which gives better
performance. It has built-in fault-tolerance features to recover from all
failure scenarios. It extends code from the Storm Kafka Spout, which has
been around for some time, has matured over the years, and has all of
Kafka's fault-tolerance capabilities built in. This same Kafka consumer for
Spark is presently running in various production deployments and has
already been adopted by many in the Spark community.
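
To give a feel for how the consumer is used, here is a sketch based on the
project README (linked at the end of this mail). The property values are
placeholders, and I may be misremembering some property names, so please
treat the README as authoritative:

    import java.util.Properties
    import consumer.kafka.ReceiverLauncher
    import org.apache.spark.SparkConf
    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(
      new SparkConf().setAppName("receiver-demo"), Seconds(5))

    val props = new Properties()
    props.put("zookeeper.hosts", "zkhost")        // placeholder
    props.put("zookeeper.port", "2181")
    props.put("kafka.topic", "events")            // placeholder
    props.put("kafka.consumer.id", "my-consumer") // placeholder

    // Launch N receivers; the consumer spreads the topic's partitions
    // across them and stores fetched blocks via Spark's receiver path.
    val numberOfReceivers = 3
    val stream = ReceiverLauncher.launch(
      ssc, props, numberOfReceivers, StorageLevel.MEMORY_ONLY)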

*Why can't we fix the existing Receiver-based API in Spark?*

This is not possible unless we move to the Kafka Low Level API, or we wait
for Kafka 0.9, where the High Level Consumer API is being rewritten, and
then build yet another Kafka consumer for Spark for Kafka 0.9 users. That
approach does not seem good to me. The Kafka Low Level API that my consumer
uses (and that DirectStream also uses) is not going to be deprecated in the
near future. So if the Kafka consumer for Spark uses the Low Level API for
Receiver-based mode, all Kafka users, whether presently on 0.8.x or moving
to 0.9, will benefit from the same API.

*Concerns around Low Level API complexity*

Yes, implementing a reliable consumer with the Kafka Low Level consumer API
is complex. But the same has been done for the Storm Kafka Spout, which has
been stable for quite some time. This consumer for Spark is battle-tested
under various production loads, gives much better performance than the
existing Kafka consumers for Spark, and has a better fault-tolerance
approach than the existing Receiver-based mode.

*Why can't this consumer continue to live in spark-packages?*

It could. But from what I see, many users who want to fall back to
Receiver-based mode, because they do not need "Exactly Once" semantics or
"Global Ordering", seem a little tentative about using a spark-packages
library for their critical streaming pipelines, so they are forced to use
the faulty and buggy mode based on the Kafka High Level API. Making this
consumer part of the Spark project would give it much higher adoption and
support from the community.

*Some major features of this consumer:*

This consumer enforces its rate limit by maintaining a constant block size
(in bytes), whereas the default rate limiting in other Spark consumers is
done by number of messages. Counting messages is a problem when Kafka holds
messages of different sizes: there is then no deterministic way to know the
actual block sizes and memory utilization.
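
To illustrate the difference (a minimal sketch, not the consumer's actual
code): grouping messages into blocks by a byte budget keeps per-block
memory roughly constant no matter how message sizes vary, which a
message-count cap cannot guarantee:

    // Group a stream of messages into blocks of roughly constant byte
    // size, instead of a constant message count.
    def blocksByBytes(msgs: Iterator[Array[Byte]],
                      budgetBytes: Long): Iterator[Vector[Array[Byte]]] =
      new Iterator[Vector[Array[Byte]]] {
        private val it = msgs.buffered
        def hasNext: Boolean = it.hasNext
        def next(): Vector[Array[Byte]] = {
          val block = Vector.newBuilder[Array[Byte]]
          var used = 0L
          // Always take at least one message, so an oversized message
          // still flows through instead of stalling the stream.
          do {
            val m = it.next(); block += m; used += m.length
          } while (it.hasNext && used + it.head.length <= budgetBytes)
          block.result()
        }
      }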

This consumer also has a built-in PID controller that governs the rate of
consumption, again by adjusting the block size, so that it fetches from
Kafka only as much as the pipeline needs. The default Spark consumer
fetches a chunk of messages first and then applies throttling to control
the rate, which can lead to excess I/O while consuming from Kafka.
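
For intuition only, here is a minimal PID sketch; the gains, names, and
the bytes-per-second formulation are illustrative, not the consumer's
actual implementation:

    // Adjust the next block's byte budget from the error between the
    // target and the observed processing rate (both in bytes/sec).
    final class RateController(kp: Double, ki: Double, kd: Double,
                               targetRate: Double) {
      private var integral = 0.0
      private var lastError = 0.0

      def nextBlockSize(observedRate: Double, intervalSec: Double): Long = {
        val error = targetRate - observedRate
        integral += error * intervalSec
        val derivative = (error - lastError) / intervalSec
        lastError = error
        val correction = kp * error + ki * integral + kd * derivative
        // Fetch only what downstream can absorb in the next interval.
        math.max(0L, ((targetRate + correction) * intervalSec).toLong)
      }
    }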

You can also refer to the README for more details:
https://github.com/dibbhatt/kafka-spark-consumer/blob/master/README.md

If you are using this consumer, or are going to use it, please vote for
the JIRA.

Regards,
Dibyendu
