If you don't specify the partition, and do have a key, then the default
behaviour is to use a hash of the key to determine the partition. This is
to make sure that messages with the same key end up on the same partition,
which helps ensure ordering relative to the key/partition. Also when using
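As a hedged illustration of that default behaviour (not the actual client code: the Java producer uses murmur2, and `partition_for` here is a hypothetical helper using crc32 as a stand-in for any stable hash), the key-to-partition mapping boils down to:

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Deterministic: the same key always hashes to the same partition,
    # so per-key ordering is preserved within that partition.
    return zlib.crc32(key) % num_partitions
```

Two sends with key `b"user-42"` therefore always land on the same partition, which is what gives you per-key ordering.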
Oleg, I believe the 0.9 producer gave you the "max.block.ms" control now
On Wed, Mar 30, 2016 at 5:31 AM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:
> I'll buy both 'back pressure' and 'block' argument, but what does it have
> to do with the Future? Isn't that the main point of the
The documentation says that the only purpose of the "key" is to decide the
partition the data ends up in. The consumer doesn't decide that. I'll have
to look at the documentation but I'm not entirely sure if the consumers
have access to this key. The producer does. You can override the default
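To make that producer-side decision concrete, here is a hypothetical sketch (the names `choose_partition` and `explicit_partition` are mine, not the client API) of the precedence the default behaviour follows: an explicitly supplied partition wins, then the key hash, then round-robin for keyless messages.

```python
import itertools
import zlib

_round_robin = itertools.count()

def choose_partition(key, num_partitions, explicit_partition=None):
    # 1. A partition given explicitly by the caller always wins.
    if explicit_partition is not None:
        return explicit_partition
    # 2. Otherwise hash the key so equal keys co-locate.
    if key is not None:
        return zlib.crc32(key) % num_partitions
    # 3. No key: spread keyless messages round-robin.
    return next(_round_robin) % num_partitions
```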
I really need some help on this.
I am able to publish new messages to the topics using the rest proxy.
The issue is that when I query the rest proxy for that topic, even though there
is data present, I get "{}" (empty results).
I will get these empty results for some non-deterministic period of
Hi, list.
We're working on a project that uses Kafka and we notice that for every
message we have a key (or null). I searched for more info about the key
itself and the documentation says that it is only used to decide the
partition where the message is placed.
Is there a problem if we use keys
I don't follow. By having two consumer objects on C3, you can consume a
portion of the messages from both T1 and T2.
So, Group1(C1,C2,C3) is subscribed to topic Topic T1. Group2(C3,C4) is
subscribed to topic T2
If you want C3 to consume all messages on T1 and T2, it would need to be
in a
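A minimal sketch of the group semantics being described, assuming a simple round-robin assignor (`assign` is a hypothetical helper, a simplification of Kafka's real range/round-robin assignors): each group independently consumes the whole topic, and partitions are split among that group's members.

```python
def assign(partitions, members):
    # Deal partitions out to the group's members in order; every
    # partition is owned by exactly one member of the group.
    layout = {m: [] for m in members}
    for i, p in enumerate(partitions):
        layout[members[i % len(members)]].append(p)
    return layout

# Group1(C1,C2,C3) splits T1's partitions; Group2(C3,C4) splits T2's.
# C3 holds one consumer object in each group, so it sees a portion of both.
group1 = assign([0, 1, 2], ["C1", "C2", "C3"])
group2 = assign([0, 1], ["C3", "C4"])
```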
-- Forwarded message --
> From: Jeff Klukas
> To: users@kafka.apache.org
> Cc:
> Date: Wed, 30 Mar 2016 11:14:53 -0400
> Subject: KStream-KTable join with the KTable given a "head start"
> I have a KStream that I want to enrich with some values from a lookup
>
Hi Jeff,
This is a common case of stream-table join, in which the joining results
depend on the arrival ordering from these two sources.
In Kafka Streams you can try to "synchronize" multiple input streams
through the "TimestampExtractor" interface, which is used to assign a
timestamp to each
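As a hedged sketch of what timestamp assignment buys you (plain Python stand-ins, not the Kafka Streams API: `extract_ts` plays the role of a `TimestampExtractor`), once every record carries a timestamp, the two inputs can be consumed in merged time order:

```python
import heapq

def extract_ts(record):
    # Stand-in for a TimestampExtractor: pull the event time out of a
    # (source, timestamp, value) tuple.
    return record[1]

def synchronized(stream_a, stream_b):
    # Consume both (individually time-ordered) inputs in global
    # timestamp order.
    return list(heapq.merge(stream_a, stream_b, key=extract_ts))

table_updates = [("table", 1, "row-a"), ("table", 5, "row-b")]
events = [("stream", 2, "evt-1"), ("stream", 6, "evt-2")]
merged = synchronized(table_updates, events)
```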
This sounds like a square peg in a round hole sort of solution. That said, you
might want to look at the work being done with kafka-streams to expose a topic
as a table.
> On Mar 30, 2016, at 3:23 PM, Michael D. Spence wrote:
>
>
> Any advice on using Kafka to store the
Any advice on using Kafka to store the actual messages?
On 3/22/2016 6:32 PM, Michael D. Spence wrote:
We have to construct a messaging application that functions as a
switch between other applications in the enterprise. Since our switch
needs to retain only a few days' worth of messages, we are
Hi Oleg,
The binary protocol is compatible, so you don't have to worry about 0.9
consumers not working with 0.10. But the API changes to the Java client are
not binary compatible (you will have to recompile your code to use the 0.10
version of the client). Here is the KIP which details the
Hi,
I had a Kafka cluster with three brokers.
I killed two of them by mistake.
I restarted them with the same server.properties config files that were used
when running them the first time, but it is not functioning correctly.
By this I mean when I run bin/kafka-console-consumer.sh --zookeeper
Jason
Are those API changes you mentioned binary compatible with previous release?
Cheers
Oleg
> On Mar 30, 2016, at 12:03 PM, Jason Gustafson wrote:
>
> Hi Prabhakar,
>
> We fixed a couple critical bugs in the 0.9.0.1 release, so you should
> definitely make sure to use
Hi Prabhakar,
We fixed a couple critical bugs in the 0.9.0.1 release, so you should
definitely make sure to use that version if you want to try it out. Since
then, we've mostly been tweaking the behavior for some edge cases and
trying to improve messaging. I'd recommend giving it a shot. The
I've also asked on stackoverflow, in case you prefer to answer there:
http://stackoverflow.com/questions/36313470/consumerrebalancefailedexception-with-the-kafka-console-consumer
Thanks,
Filipe
On Wed, Mar 30, 2016 at 4:03 PM, Filipe Correia
wrote:
> Hi there,
>
>
I have a KStream that I want to enrich with some values from a lookup
table. When a new key enters the KStream, there's likely to be a
corresponding entry arriving on the KStream at the same time, so we end up
with a race condition. If the KTable record arrives first, then its value
is available
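One hedged workaround sketch for that race (hypothetical helper names, not a Kafka Streams feature): enrich from the table when the entry is present, and park misses to retry once the table side catches up.

```python
def join_with_retry(events, table, pending):
    # Enrich each (key, value) event from the lookup table; events whose
    # table entry hasn't arrived yet are parked in `pending` for a retry.
    joined = []
    for key, value in events:
        if key in table:
            joined.append((key, value, table[key]))
        else:
            pending.append((key, value))
    return joined

pending = []
joined = join_with_retry([("a", 1), ("b", 2)], {"a": "A"}, pending)
```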
Hi there,
I've just installed kafka 0.9.0.1, and I'm getting the following error when
launching the kafka-console-consumer:
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic
myrandomtesttopic --from-beginning
[2016-03-30 15:46:17,568] ERROR Unknown error when running consumer:
I would like to add a little more context to this: the problem is not hard
to reproduce.
If you are using
- auto commit
- heartbeat time = commit time
- more than one consumer
It seems that it is always failing to send the heartbeat. Changing the values
for the heartbeat and commit to be
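A hedged consumer-config sketch along those lines (values are purely illustrative, not recommendations): keep the heartbeat interval well below the session timeout, and decouple it from the commit interval.

```properties
# Illustrative values only.
session.timeout.ms=30000
# Heartbeat well under the session timeout (commonly about a third).
heartbeat.interval.ms=3000
# Commit on its own cadence, not tied to the heartbeat.
enable.auto.commit=true
auto.commit.interval.ms=10000
```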
I'll buy both 'back pressure' and 'block' argument, but what does it have to do
with the Future? Isn't that the main point of the Future - a reference to an
invocation that may or may not occur some time in the future? Isn't that the
purpose of the Future.get(..) to give user a choice and
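That choice can be sketched generically with any future (plain Python `concurrent.futures` here, not the Kafka client): the caller decides whether to block, block with a bound, or ignore the result entirely.

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

with ThreadPoolExecutor(max_workers=1) as pool:
    # Stand-in for an async send that returns a Future.
    future = pool.submit(lambda: "ack")

    # Fire-and-forget: simply never call result().
    # Bounded wait: block for at most one second.
    try:
        result = future.result(timeout=1.0)
    except FutureTimeout:
        result = None
```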
Hi,
I am new to Kafka, and I have a question about Kafka batching.
In the 0.9 producer, how does batching work? If I set the batch size to 10 MB,
do all messages in a 10 MB batch go to one offset on the Kafka broker?
In the new high-level consumer, how does fetch size work? Is it a per-message
fetch size inside one offset, or the whole
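On the batching question: each message in a batch still gets its own offset; the batch is just a transmission unit. A simplified sketch of size-based batching (ignoring per-partition batching and `linger.ms`; `batch_by_size` is a hypothetical helper, not the client API):

```python
def batch_by_size(messages, batch_size):
    # Group messages into batches whose total bytes stay within
    # batch_size; an oversized message still forms its own batch.
    batches, current, current_bytes = [], [], 0
    for msg in messages:
        if current and current_bytes + len(msg) > batch_size:
            batches.append(current)
            current, current_bytes = [], 0
        current.append(msg)
        current_bytes += len(msg)
    if current:
        batches.append(current)
    return batches
```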
Hi,
Can we set the Kafka broker's storage size?
Suppose my system has 500 GB of space. Can I set up the Kafka broker with
100 GB on my system?
Thanks
Manish
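On the broker-size question above: as far as I know there is no single broker-wide size knob; disk usage is bounded indirectly via per-partition retention settings. A hedged sketch with illustrative values:

```properties
# Illustrative values only. log.retention.bytes applies per partition,
# so total usage is roughly partitions-on-broker x log.retention.bytes.
log.retention.bytes=1073741824
log.retention.hours=168
```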
Can anyone help?
> On Mar 29, 2016, at 18:57, jinhong lu wrote:
>
>
>
> Hi, I found this log in my server.log.
>
> The replica's offset is larger than the leader's, so the replica's data
> will be deleted, and then the data will be copied from the leader.
> But while copying, the cluster is very
Was wondering the same. From what I can tell, it shows "unknown" when no
committed offset has been recorded for that partition by the consumer.
On Mon, Mar 28, 2016 at 12:25 PM, craig w wrote:
> When using the ConsumerGroupCommand to describe a group (using
> new-consumer, 0.9.0.1)