Are you using the new Java consumer? What method are you using to commit
offsets?
-Dave
-Original Message-
From: Ghosh, Achintya (Contractor) [mailto:achintya_gh...@comcast.com]
Sent: Tuesday, September 20, 2016 8:56 AM
To: users@kafka.apache.org
Cc: d...@kafka.apache.org
Subject:
This is probably better posted on the Flume dev or users lists
(d...@flume.apache.org and u...@flume.apache.org). I suspect you'll get a
better response there (or even the Cloudera community forums, as there is
likely some Kite SDK experience there).
I think what you are saying is that you have a
Hi there,
I see the Kafka consumer receiving the same offset value many times, hence it
creates a lot of duplicate messages. What could be the reason and how can we
solve this issue?
Thanks
Achintya
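Duplicate deliveries usually come from offsets being re-consumed after a rebalance or a failed commit; Kafka's default guarantee is at-least-once, so the consuming application has to tolerate or filter duplicates. As a rough illustration of one mitigation (the classes below are simplified stand-ins, not the real Kafka client API), the consumer can track the highest offset processed per partition and skip anything at or below it:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch: tracks the highest offset processed per partition and
// filters out re-delivered records. "partition" and "offset" stand in
// for the fields a real ConsumerRecord would carry.
public class OffsetDeduplicator {
    private final Map<Integer, Long> lastProcessed = new HashMap<>();

    /** Returns true if the record is new, and marks it as processed. */
    public boolean shouldProcess(int partition, long offset) {
        long last = lastProcessed.getOrDefault(partition, -1L);
        if (offset <= last) {
            return false; // already seen: a duplicate delivery
        }
        lastProcessed.put(partition, offset);
        return true;
    }

    public static void main(String[] args) {
        OffsetDeduplicator dedup = new OffsetDeduplicator();
        System.out.println(dedup.shouldProcess(0, 42)); // true  (first time)
        System.out.println(dedup.shouldProcess(0, 42)); // false (duplicate)
        System.out.println(dedup.shouldProcess(0, 43)); // true
    }
}
```

Note this only filters redelivery within one consumer process; cross-restart deduplication needs the tracking state persisted alongside the processed results.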
Thanks for sharing Radek, great article.
Michael
> On 17 Sep 2016, at 21:13, Radoslaw Gruchalski wrote:
>
> Please read this article:
> https://engineering.linkedin.com/distributed-systems/log-what-every-software-engineer-should-know-about-real-time-datas-unifying
>
> –
I am using the Java producer client, with Callback:
http://kafka.apache.org/082/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html#send(org.apache.kafka.clients.producer.ProducerRecord,%20org.apache.kafka.clients.producer.Callback)
(I am not using the returned Future.)
Is there any way
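For what it's worth, the callback-versus-future distinction can be sketched with the JDK's own CompletableFuture (a stdlib analogue, not the Kafka producer API): attaching a completion handler delivers the result or the error without ever blocking on the returned future, which is essentially the shape of send(record, callback).

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

// Hedged sketch: an async "send" whose outcome is observed via a
// completion handler rather than by blocking on the returned future,
// mirroring the send(record, callback) pattern.
public class CallbackStyleSend {
    static CompletableFuture<String> send(String record) {
        // Stand-in for an asynchronous network send.
        return CompletableFuture.supplyAsync(() -> "ack:" + record);
    }

    public static void main(String[] args) {
        AtomicReference<String> result = new AtomicReference<>();
        send("hello")
            .whenComplete((metadata, exception) -> {
                if (exception != null) {
                    // In a real client this is where a send failure surfaces.
                    result.set("error: " + exception.getMessage());
                } else {
                    result.set(metadata);
                }
            })
            .join(); // joined only so this demo waits before printing
        System.out.println(result.get()); // prints "ack:hello"
    }
}
```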
One possible solution might be to use parkeeper, which uses Consul as
the backend and exposes a facade that looks like ZooKeeper:
https://github.com/glerchundi/parkeeper
The project doesn't seem very active, though, and it is unclear whether it
supports all the features that Kafka uses.
> not aware of any shortfall with zookeeper so perhaps you can suggest
> advantages for Consul vs Zookeeper?
Maybe it's somewhat off-topic here, but Consul has several advantages over
Zookeeper:
* It's IMHO easier to maintain: adding leader nodes, removing leader nodes, etc.
* Has high level service
I'm using version 10.0
From: Hamza HACHANI
Sent: Monday, 19 September 2016 19:20:23
To: users@kafka.apache.org
Subject: RE: Error kafka-stream method punctuate in context.forward()
Hi Guozhang,
Here is the code for the two concerned classes
If this can help: I figured out that the instances of
ProcessorStatsByHourSupplier and ProcessorStatsByMinuteSupplier which are
returned are the same.
I think this is the problem. I tried to fix it but I was not able to do it.
Thanks
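If the suppliers really do hand back the same processor object on every call, all tasks end up sharing one mutable instance, which is a classic source of trouble; the fix is for the supplier to construct a fresh processor on each get(). A minimal illustration using the JDK's Supplier interface (StatsProcessor is a hypothetical stand-in; the same rule applies to a Kafka Streams ProcessorSupplier):

```java
import java.util.function.Supplier;

public class SupplierSharing {
    // Hypothetical stand-in for a stateful processor class.
    static class StatsProcessor {
        long count = 0;
    }

    public static void main(String[] args) {
        // Buggy: one instance created eagerly and handed out repeatedly,
        // so every caller shares its mutable state.
        StatsProcessor shared = new StatsProcessor();
        Supplier<StatsProcessor> buggy = () -> shared;

        // Correct: a new instance per get(), so each task owns its state.
        Supplier<StatsProcessor> correct = StatsProcessor::new;

        System.out.println(buggy.get() == buggy.get());     // true  (shared!)
        System.out.println(correct.get() == correct.get()); // false (independent)
    }
}
```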
Hi,
I'm trying to use the Confluent JDBC Sink as Sri is doing, but without a schema.
I do not want to write "schema" + "payload" for each record, as my records are
all for the same table and the schema is not going to change (this is a very
simple project).
Thanks
Enrico
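For context, the "schema" + "payload" envelope comes from the JsonConverter running with schemas enabled; it can be switched off with the snippet below (an assumed fragment of a Connect worker or connector config). Be aware, though, that as far as I know the JDBC sink needs schema information to build its table statements, so plain schemaless JSON typically won't work with it; using Avro with a schema registry is the usual way to avoid the per-record envelope.

```properties
# Assumed Kafka Connect config fragment. With schemas disabled,
# records are plain JSON without the {"schema": ..., "payload": ...}
# envelope.
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
```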
On Mon,