Spark 2.0 has experimental support for Kafka 0.10, and you have to explicitly
declare it in your build, e.g. spark-streaming-kafka-0-10
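The steps above can be sketched as follows. This is a minimal sketch, assuming the spark-streaming-kafka-0-10 integration, an already-constructed StreamingContext `ssc`, and placeholder topic/group names; with the new consumer API the commit goes to Kafka's __consumer_offsets topic rather than ZooKeeper:

```scala
// build.sbt (version is illustrative; match it to your Spark version):
// libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.0.0"

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

// New-consumer config: note bootstrap.servers, not zookeeper.connect
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "localhost:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "my-group",                         // placeholder group id
  "enable.auto.commit" -> (false: java.lang.Boolean)
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("my-topic"), kafkaParams))

stream.foreachRDD { rdd =>
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  // ... process the batch ...
  // Commit the consumed offsets back to Kafka (__consumer_offsets)
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsetRanges)
}
```

With enable.auto.commit disabled, commitAsync gives at-least-once semantics: offsets are committed only after the batch has been processed.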
On 13 Oct 2016 16:10, "Ben Davison" <ben.davi...@7digital.com> wrote:
> I *think* Spark 2.0.0 has a Kafka 0.8 consumer, which would still use the
> old Zookeeper method.
> To use the new consumer offsets, the consumer needs to be at least Kafka 0.9
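With the new (0.9+) consumer, offsets are committed to the __consumer_offsets topic when the client is configured against the brokers rather than ZooKeeper. A minimal sketch of the relevant consumer properties (the group id is a placeholder):

```properties
# New-consumer configuration: points at brokers, not ZooKeeper
bootstrap.servers=localhost:9092
group.id=my-group
# Commit offsets to Kafka automatically
# (or set to false and call commitSync/commitAsync yourself)
enable.auto.commit=true
```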
> On Thu, Oct 13, 2016 at 1:55 PM, Samy Dindane <s...@dindane.com> wrote:
> > Hi,
> > I use Kafka 0.10 with ZK 3.4.6 and my consumers' offsets aren't stored in
> > the __consumer_offsets topic but in ZK instead.
> > That happens whether I let the consumer commit automatically, or commit
> > manually with enable.auto.commit set to false.
> > Same behavior with `offsets.storage=kafka`, which isn't surprising as that
> > configuration value was dropped in 0.10.
> > `kafka-console-consumer --topic __consumer_offsets --zookeeper
> > localhost:/kafka-exp --bootstrap-server localhost:9092` shows nothing while
> > my program is committing offsets.
> > Not sure it matters, but I consume the topic using a Spark 2.0.0 app.
> > Is there anything specific I should do to store consumers' offsets in a
> > Kafka topic instead of ZooKeeper?
> > Thank you for your help!
> > Samy