You cannot send images over the mailing list. They get automatically
removed.
On 12/6/16 11:55 PM, 陈超 wrote:
> Hi kafka developer,
>
>
>
> I have a Kafka cluster with 3 nodes, and it currently has 3 topics. We
> are not putting much data into the Kafka topics yet, but the sync traffic
> to each
You cannot send images over the mailing list; they get automatically
removed.
On 12/6/16 11:15 PM, paradixrain wrote:
> Dear kafka,
> I think there is an error in the document, is that right?
>
>
> Here's what I did:
> Step 1:
> open a producer
> ./kafka-console-producer.sh --broker-list
Are you setting the group.id property to be the same on both consumers?
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example
-hans
/**
* Hans Jespersen, Principal Systems Engineer, Confluent Inc.
* h...@confluent.io (650)924-2670
*/
On Wed, Dec 7, 2016 at 12:36 PM,
Have you set them to the same consumer group ID? That's what identifies
a consumer group.
On Thu, Dec 8, 2016 at 2:06 AM, Justin Smith wrote:
> I read this paragraph under Kafka as a Messaging System.
>
>
>
> “The consumer group concept in Kafka generalizes these two
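To act as one consumer group (queue-style load sharing), both consumers must be configured with an identical group.id. A minimal sketch of the relevant configuration, using the standard Kafka consumer property names (the broker address is a placeholder):

```java
import java.util.Properties;

public class GroupIdExample {
    // Build consumer properties. The keys are the standard Kafka consumer
    // configuration names; both consumers must use the same group.id to be
    // treated as members of one consumer group.
    static Properties consumerProps(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", groupId); // must be identical on both consumers
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        // Two consumers intended to divide up the partitions of one topic:
        Properties c1 = consumerProps("my-shared-group");
        Properties c2 = consumerProps("my-shared-group");
        System.out.println(c1.getProperty("group.id")
                .equals(c2.getProperty("group.id"))); // prints "true"
    }
}
```

With different group.id values, each consumer would instead receive every message (publish-subscribe semantics).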
Hi kafka developer,
I have a Kafka cluster with 3 nodes, and it currently has 3 topics. We are
not putting much data into the Kafka topics yet, but the node-to-node sync
bandwidth is up to 4 Mb/s. I don't know why it is so high. See the
picture below:
Iftop info:
Explanation:
Dear kafka,
I think there is an error in the document, is that right?
Here's what I did:
Step 1:
open a producer
./kafka-console-producer.sh --broker-list localhost:9092 --topic test
Step 2:
open a consumer
./kafka-console-consumer.sh --zookeeper localhost:9092 --topic test
--from-beginning
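If the suspected documentation error is the port in the consumer command, note that the old console consumer's --zookeeper option points at ZooKeeper (default port 2181), not at the broker (9092). A hedged sketch of how the two commands would normally look, assuming default ports and a running cluster:

```shell
# Producer talks to the Kafka broker (default port 9092):
./kafka-console-producer.sh --broker-list localhost:9092 --topic test

# The old consumer talks to ZooKeeper (default port 2181), not the broker:
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test \
  --from-beginning
```

These commands require a running broker and ZooKeeper, so they are shown here only as a sketch.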
Hey guys,
I'm having a hell of a time here. I've worked for days trying to get
this joining pipeline working. I thought I had it working last week,
but my jubilation was premature. The point was to take data in from
five different topics and merge them together to obtain one enriched
I read this paragraph under Kafka as a Messaging System.
"The consumer group concept in Kafka generalizes these two concepts. As with a
queue the consumer group allows you to divide up processing over a collection
of processes (the members of the consumer group). As with publish-subscribe,
One note is that in this bug-fix release we also include artifacts built
from Scala 2.12.1, as a pre-alpha artifact for the Scala community to try
out (it is built with Java 8, while all other artifacts
are built with Java 7). We hope to formally add Scala 2.12 support in
Hello Kafka users, developers and client-developers,
This is the first candidate for the release of Apache Kafka 0.10.1.1. This is
a bug fix release and it includes fixes and improvements from 27 JIRAs. See
the release notes for more details:
Two ideas:
you could use a new consumer group id and set the TopicConfig property
"auto.offset.reset" to "smallest". Consumers in the new group will read
from the beginning on all partitions.
Alternatively, as an example of how to set the offset explicitly, you can
modify the AdvancedConsumer
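For the 0.9+ Java consumer, the first idea translates to a fresh group.id plus auto.offset.reset=earliest (the old Scala consumer spells the value "smallest", as above). A minimal sketch with placeholder names:

```java
import java.util.Properties;
import java.util.UUID;

public class ReReadConfig {
    // Sketch: a brand-new group.id has no committed offsets, so
    // auto.offset.reset decides where to start. "earliest" (new Java
    // consumer) / "smallest" (old Scala consumer) means the beginning.
    static Properties reReadFromBeginning() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        // Fresh group => no stored offsets => auto.offset.reset applies:
        props.put("group.id", "replay-" + UUID.randomUUID());
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        Properties p = reReadFromBeginning();
        System.out.println(p.getProperty("auto.offset.reset")); // prints "earliest"
    }
}
```

Note that once this group commits offsets, auto.offset.reset no longer applies to it; that is why a fresh group.id is needed for each replay.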
The bug I was referring to was only in trunk for a short while. Thus, your
issue must be related to something else, even though the response statuses
are similar.
Let me know if you want to share a bigger and more detailed (DEBUG level at
least) snapshot of the parts of the logs that might be
I'm attempting to set the offset for a RdKafka-dotnet consumer in order to
re-read the topic, but I've not seen any documentation or examples that do
this. I saw a reference for librdkafka that seems to show a set_offset method
off the TopicPartition class and an Offset property on the
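I have not used RdKafka-dotnet myself, but for comparison, newer Kafka versions let the console consumer start from an explicit offset via --partition/--offset (and the Java client exposes KafkaConsumer.seek() for the same purpose). A hedged CLI sketch, assuming a running broker:

```shell
# Re-read partition 0 of topic "test" from offset 0 (new consumer;
# --partition/--offset are supported in newer Kafka versions):
./kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test --partition 0 --offset 0
```

Whether RdKafka-dotnet surfaces the librdkafka offset APIs the same way is exactly the open question in this thread.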
Hello Konstantine,
Thanks for your reply.
I am using Confluent 3.0.1 installed on my machine and our cluster. However,
our AWS cluster has Confluent 3.1.1 installed so I will test with 3.1.1 client
and cluster and see if this resolves the issue. Additionally, I’ll use the
debug levels if
The maintainer of librdkafka was able to reproduce the latency. He thinks
it may be some sort of batching algorithm similar to Nagle inside OpenSSL.
Status of the issue is maintained at:
https://github.com/edenhill/librdkafka/issues/920
Thanks to all on this mailing list for your help in
Is auto.offset.reset honored only the first time the consumer starts
polling? In other words, does the consumer start from the beginning every
time it starts, even if it has already read those messages?
On Wed, Dec 7, 2016 at 1:43 AM, Harald Kirsch
wrote:
> Have you defined
Hey guys,
I'm having a hell of a time here. I've worked for days trying to get
this joining pipeline working. I thought I had it working last week,
but my jubilation was premature. The point was to take data in from
five different topics and merge them together to obtain one enriched
I'm not sure why you observed that aggregation works OK when a String-typed
key is used. I think I agree with Radek that the problem comes from the value,
and here is my understanding:
1. The source stream read from the topic named "rtDetailLines" is in type
2. After the map
From: Tuan Dang
Sent: Wednesday, December 7, 2016 10:00 AM
To: users@kafka.apache.org
Subject: reacting to a truststore change
Hello all,
I'm working my way through Kafka 0.9 SSL/TLS authentication.
If I make a change to the
Hello all,
I'm working my way through Kafka 0.9 SSL/TLS authentication.
If I make a change to the truststore, either adding or removing a
certificate, will Kafka automatically pick up the changes or would I need
to restart ?
My main issue is how to unauthorize a producer. I've seen
Hi Jon,
The "/windowed" name in the web server example is just an example name; it
could have been called something else. It is built, however, on the Interactive
Query APIs which are fixed. In the example code I mentioned we see the
implementation as shown below. Again, the web server code is
I'm having trouble finding documentation on this new feature. Can you point
me to anything?
Specifically on how to get available "from/to" values but more generally on
how to use the "windowed" query.
On Wed, Dec 7, 2016 at 1:25 AM, Eno Thereska wrote:
> Hi Jon,
>
> This
Note that Sumant has been working on a KIP proposal to make the producer
timeout behaviour more intuitive:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-91+Provide+Intuitive+User+Timeouts+in+The+Producer
Ismael
On Wed, Dec 7, 2016 at 9:42 AM, Rajini Sivaram
Hi Asaf,
That PR is for the backport to 0.9.0.x, the original change was merged to
trunk and is in 0.10.x.x.
Ismael
On Tue, Dec 6, 2016 at 10:10 AM, Asaf Mesika wrote:
> Vatsal:
>
> I don't think they merged the fix for this bug (retries doesn't work) in
> 0.9.x to
Have you defined
auto.offset.reset: earliest
or otherwise made sure (KafkaConsumer.position()) that the consumer does
not just wait for *new* messages to arrive?
Harald.
On 06.12.2016 20:11, Mohit Anchlia wrote:
I see this message in the logs:
[2016-12-06 13:54:16,586] INFO
If you just want to test retries, you could restart Kafka while the
producer is running and you should see the producer retry while Kafka is
down/leader is being elected after Kafka restarts. If you specifically want
a TimeoutException to trigger all retries, I am not sure how you can. I
would
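For reference, the producer settings that govern this retry behaviour can be sketched as plain properties. The keys are the standard producer configuration names; the values here are illustrative only:

```java
import java.util.Properties;

public class RetryConfig {
    // Standard producer config names; values are illustrative, not
    // recommendations. With retries > 0, a send that fails with a
    // retriable error (e.g. leader election during a broker restart)
    // is retried after retry.backoff.ms.
    static Properties retryingProducerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("retries", "5");                // retry failed sends
        props.put("retry.backoff.ms", "500");     // wait between retries
        props.put("request.timeout.ms", "30000"); // per-request timeout
        return props;
    }

    public static void main(String[] args) {
        System.out.println(retryingProducerProps().getProperty("retries")); // prints "5"
    }
}
```

Restarting the broker while a producer built from such a config is sending, as suggested above, is the easiest way to watch the retries happen.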
By 'restart' I mean a 'let it crash' setup (as promoted by Erlang and
Akka, e.g.
http://doc.akka.io/docs/akka/snapshot/intro/what-is-akka.html). The
consumer gets in trouble due to an OOM or a runaway computation or
whatever that we want to preempt somehow. It crashes or gets killed
Hi Jon,
This will be a windowed store. Have a look at the Jetty-server bits for
windowedByKey:
"/windowed/{storeName}/{key}/{from}/{to}"
Thanks
Eno
> On 6 Dec 2016, at 23:33, Jon Yeargers wrote:
>
> I copied out some of the WordCountInteractive
>