Latest Release of Receiver based Kafka Consumer for Spark Streaming.
Hi,

Released the latest version of the Receiver-based Kafka Consumer for Spark Streaming.

Available at Spark Packages: https://spark-packages.org/package/dibbhatt/kafka-spark-consumer

Also at GitHub: https://github.com/dibbhatt/kafka-spark-consumer

Some key features:

- Tuned for better performance
- Support for Spark 2.x and Kafka 0.10
- Support for consumer lag checking (ConsumerOffsetChecker, Burrow, etc.)
- WAL-less recovery
- Better-tuned PID controller with automatic rate adjustment to incoming traffic
- Support for custom message interceptors

Please refer to https://github.com/dibbhatt/kafka-spark-consumer/blob/master/README.md for more details.

Regards,
Dibyendu
Re: Latest Release of Receiver based Kafka Consumer for Spark Streaming.
Hi,

This package is not dependent on any specific Spark release and can be used with 1.5. Please refer to the "How To" section here: https://spark-packages.org/package/dibbhatt/kafka-spark-consumer

You will also find more information in the README on how to use this package.

Regards,
Dibyendu

On Thu, Aug 25, 2016 at 7:01 PM, wrote:
> Hi Dibyendu,
>
> Looks like it is available in 2.0; we are using an older version of Spark, 1.5.
> Could you please let me know how to use this with older versions?
>
> Thanks,
> Asmath
>
> Sent from my iPhone
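Since the package is published on Spark Packages, one common way to pull it into a Spark 1.5 job is spark-submit's --packages flag. A minimal sketch follows; the version coordinate, main class, and jar name below are placeholders, not actual release values, so check the Spark Packages page for the current coordinate:

```shell
# Pull the consumer from Spark Packages at submit time.
# "x.y.z" is a placeholder version; look up the current release on
# https://spark-packages.org/package/dibbhatt/kafka-spark-consumer
# MyStreamingJob and my-streaming-job.jar are hypothetical names.
spark-submit \
  --packages dibbhatt:kafka-spark-consumer:x.y.z \
  --class com.example.MyStreamingJob \
  my-streaming-job.jar
```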
Re: Latest Release of Receiver based Kafka Consumer for Spark Streaming.
Hi Dibyendu,

Looks like it is available in 2.0; we are using an older version of Spark, 1.5. Could you please let me know how to use this with older versions?

Thanks,
Asmath

Sent from my iPhone

> On Aug 25, 2016, at 6:33 AM, Dibyendu Bhattacharya wrote:
Latest Release of Receiver based Kafka Consumer for Spark Streaming.
Hi,

Released the latest version of the Receiver-based Kafka Consumer for Spark Streaming.

The receiver is compatible with Kafka versions 0.8.x, 0.9.x, and 0.10.x, and with all Spark versions.

Available at Spark Packages: https://spark-packages.org/package/dibbhatt/kafka-spark-consumer

Also at GitHub: https://github.com/dibbhatt/kafka-spark-consumer

Salient features:

- End-to-end no data loss without a Write Ahead Log
- ZK-based offset management for both consumed and processed offsets
- No dependency on WAL and checkpoints
- In-built PID controller for rate limiting and backpressure management
- Custom message interceptor

Please refer to https://github.com/dibbhatt/kafka-spark-consumer/blob/master/README.md for more details.

Regards,
Dibyendu
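As a rough illustration of how the receiver is typically wired up: the sketch below is based on my reading of the project README, and the property keys, the ReceiverLauncher entry point, and the message accessor are all assumptions to verify against the README for the release you use, not guaranteed API:

```scala
// Sketch only: property keys and the ReceiverLauncher signature follow
// the project README and may differ between releases.
import java.util.Properties

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import consumer.kafka.ReceiverLauncher // assumed package path

object ConsumerSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-spark-consumer-sketch")
    val ssc = new StreamingContext(conf, Seconds(30))

    // ZK connection and consumer identity; offsets for both consumed and
    // processed messages are kept in ZooKeeper, so recovery needs no
    // WAL or Spark checkpoint.
    val props = new Properties()
    props.put("zookeeper.hosts", "zkhost")        // hypothetical host
    props.put("zookeeper.port", "2181")
    props.put("kafka.topic", "mytopic")           // hypothetical topic
    props.put("kafka.consumer.id", "my-consumer") // used for ZK offset paths

    val numberOfReceivers = 1
    val stream = ReceiverLauncher.launch(
      ssc, props, numberOfReceivers, StorageLevel.MEMORY_ONLY)

    // Each record carries the payload plus Kafka metadata.
    stream.foreachRDD(rdd => rdd.foreach(m => println(new String(m.getPayload))))

    ssc.start()
    ssc.awaitTermination()
  }
}
```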