Re: Question about 'key'

2016-03-30 Thread Gerard Klijs
If you don't specify the partition, and do have a key, then the default behaviour is to use a hash of the key to determine the partition. This is to make sure that messages with the same key end up on the same partition, which helps ensure ordering relative to the key/partition. Also when using
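
The deterministic key-to-partition mapping described above can be sketched in plain Java. This is a simplified stand-in, not the Kafka client itself: the real default partitioner hashes the serialized key with murmur2, but the hash-modulo idea is the same, and `partitionFor` is an illustrative name, not a Kafka API.

```java
import java.nio.charset.StandardCharsets;

public class KeyPartitioning {
    // Illustrative stand-in for the client's hash (Kafka uses murmur2):
    // same key always yields the same partition for a fixed partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] bytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = 0;
        for (byte b : bytes) {
            hash = 31 * hash + b;
        }
        // Mask to non-negative before taking the modulus.
        return (hash & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key maps to the same partition on every call.
        System.out.println(partitionFor("user-42", 6) == partitionFor("user-42", 6));
    }
}
```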

Re: KafkaProducer "send" blocks on first attempt with Kafka server offline

2016-03-30 Thread Steven Wu
Oleg, I believe the 0.9 producer now gives you that control via "max.block.ms". On Wed, Mar 30, 2016 at 5:31 AM, Oleg Zhurakousky < ozhurakou...@hortonworks.com> wrote: > I'll buy both 'back pressure' and 'block' argument, but what does it have > to do with the Future? Isn't that the main point of the
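
A minimal producer config sketch showing the setting mentioned above (values are illustrative): `max.block.ms` caps how long `send()` may block, for example while fetching metadata with the broker offline.

```properties
bootstrap.servers=localhost:9092
# Upper bound (ms) on how long send() may block, e.g. broker unreachable.
max.block.ms=5000
```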

Re: Question about 'key'

2016-03-30 Thread Sharninder
The documentation says that the only purpose of the "key" is to decide the partition the data ends up in. The consumer doesn't decide that. I'll have to look at the documentation but I'm not entirely sure if the consumers have access to this key. The producer does. You can override the default

RE: Rest Proxy Question

2016-03-30 Thread Heath Ivie
I really need some help on this. I am able to publish new messages to the topics using the rest proxy. The issue is that when I query the rest proxy for that topic, even though there is data present, I get "{}" (empty results). I will get this empty results for some non-deterministic period of

Question about 'key'

2016-03-30 Thread Marcelo Oikawa
Hi, list. We're working on a project that uses Kafka and we notice that for every message we have a key (or null). I searched for more info about the key itself and the documentation says that it is only used to decide the partition where the message is placed. Is there a problem if we use keys

Re: Queue implementation

2016-03-30 Thread Helleren, Erik
I don't follow. By having two consumer objects on C3, you can consume a portion of the messages from both T1 and T2. So, Group1(C1,C2,C3) is subscribed to topic Topic T1. Group2(C3,C4) is subscribed to topic T2 If you want C3 to consume all messages on T1 and T2, it would need to be in a

Re: KStream-KTable join with the KTable given a "head start"

2016-03-30 Thread Jeff Klukas
-- Forwarded message -- > From: Jeff Klukas > To: users@kafka.apache.org > Cc: > Date: Wed, 30 Mar 2016 11:14:53 -0400 > Subject: KStream-KTable join with the KTable given a "head start" > I have a KStream that I want to enrich with some values from a lookup >

Re: KStream-KTable join with the KTable given a "head start"

2016-03-30 Thread Guozhang Wang
Hi Jeff, This is a common case of stream-table join, in that the joining results depending on the arrival ordering from these two sources. In Kafka Streams you can try to "synchronize" multiple input streams through the "TimestampExtractor" interface, which is used to assign a timestamp to each
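
The effect of assigning timestamps and then processing the two inputs oldest-first can be illustrated with a stdlib-only sketch. This is not the Kafka Streams `TimestampExtractor` API itself, just the ordering idea behind it; `Rec` and `mergeByTimestamp` are illustrative names.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TimestampMerge {
    static final class Rec {
        final long ts;
        final String value;
        Rec(long ts, String value) { this.ts = ts; this.value = value; }
    }

    // Once every record carries a timestamp, two inputs can be processed in
    // timestamp order, so an "earlier" table update is seen before a
    // "later" stream event regardless of arrival order.
    static List<String> mergeByTimestamp(List<Rec> a, List<Rec> b) {
        List<Rec> all = new ArrayList<>(a);
        all.addAll(b);
        all.sort(Comparator.comparingLong(r -> r.ts)); // oldest first
        List<String> out = new ArrayList<>();
        for (Rec r : all) {
            out.add(r.value);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Rec> stream = List.of(new Rec(2, "stream-event"));
        List<Rec> table = List.of(new Rec(1, "table-update"));
        // The older table update is processed first: [table-update, stream-event]
        System.out.println(mergeByTimestamp(stream, table));
    }
}
```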

Re: Using Kafka for persistence

2016-03-30 Thread Rick Mangi
This sounds like a square peg in a round hole sort of solution. That said, you might want to look at the work being done with kafka-streams to expose a topic as a table. > On Mar 30, 2016, at 3:23 PM, Michael D. Spence wrote: > > > Any advice on using Kafka to store the

Re: Using Kafka for persistence

2016-03-30 Thread Michael D. Spence
Any advice on using Kafka to store the actual messages? On 3/22/2016 6:32 PM, Michael D. Spence wrote: We have to construct a messaging application that functions as a switch between other applications in the enterprise. Since our switch need only have a few days worth of messages, we are

Re: Using the new consumer client API 0.0.9

2016-03-30 Thread Jason Gustafson
Hi Oleg, The binary protocol is compatible, so you don't have to worry about 0.9 consumers not working with 0.10. But the API changes to the Java client are not binary compatible (you will have to recompile your code to use the 0.10 version of the client). Here is the KIP which details the

How do I restore dead Kafka brokers?

2016-03-30 Thread Eric Hyunwoo Na
Hi, I had a Kafka cluster with three brokers. I killed two of them by mistake. I restarted them with the same server.properties config files that were used in running them the first time, but they are not functioning correctly. By this I mean when I run bin/kafka-console-consumer.sh --zookeeper

Re: Using the new consumer client API 0.0.9

2016-03-30 Thread Oleg Zhurakousky
Jason Are those API changes you mentioned binary compatible with previous release? Cheers Oleg > On Mar 30, 2016, at 12:03 PM, Jason Gustafson wrote: > > Hi Prabhakar, > > We fixed a couple critical bugs in the 0.9.0.1 release, so you should > definitely make sure to use

Re: Using the new consumer client API 0.0.9

2016-03-30 Thread Jason Gustafson
Hi Prabhakar, We fixed a couple critical bugs in the 0.9.0.1 release, so you should definitely make sure to use that version if you want to try it out. Since then, we've mostly been tweaking the behavior for some edge cases and trying to improve messaging. I'd recommend giving it a shot. The

Re: ConsumerRebalanceFailedException with the kafka-console-consumer (bug?)

2016-03-30 Thread Filipe Correia
I've also asked on stackoverflow, in case you prefer to answer there: http://stackoverflow.com/questions/36313470/consumerrebalancefailedexception-with-the-kafka-console-consumer Thanks, Filipe On Wed, Mar 30, 2016 at 4:03 PM, Filipe Correia wrote: > Hi there, > >

KStream-KTable join with the KTable given a "head start"

2016-03-30 Thread Jeff Klukas
I have a KStream that I want to enrich with some values from a lookup table. When a new key enters the KStream, there's likely to be a corresponding entry arriving on the KStream at the same time, so we end up with a race condition. If the KTable record arrives first, then its value is available
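
One way to reason about the race is a buffer-and-retry sketch in plain Java. This is illustrative only and not a Kafka Streams API: stream records whose key is missing from the table are parked until the matching table update arrives.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class StreamTableJoin {
    // Toy lookup "table", a buffer for stream records that lost the race,
    // and the join output (all names are illustrative).
    final Map<String, String> table = new HashMap<>();
    final List<String> waiting = new ArrayList<>();
    final List<String> joined = new ArrayList<>();

    void onTableUpdate(String key, String value) {
        table.put(key, value);
        // Retry buffered stream records now that the table has grown.
        for (Iterator<String> it = waiting.iterator(); it.hasNext(); ) {
            String k = it.next();
            if (table.containsKey(k)) {
                joined.add(k + ":" + table.get(k));
                it.remove();
            }
        }
    }

    void onStreamRecord(String key) {
        String v = table.get(key);
        if (v == null) {
            waiting.add(key); // table record hasn't arrived yet; park it
        } else {
            joined.add(key + ":" + v);
        }
    }

    public static void main(String[] args) {
        StreamTableJoin j = new StreamTableJoin();
        j.onStreamRecord("user-1");       // arrives before the table entry
        j.onTableUpdate("user-1", "gold"); // entry arrives; buffered record joins
        System.out.println(j.joined);      // [user-1:gold]
    }
}
```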

ConsumerRebalanceFailedException with the kafka-console-consumer (bug?)

2016-03-30 Thread Filipe Correia
Hi there, I've just installed kafka 0.9.0.1, and I'm getting the following error when launching the kafka-console-consumer: $ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic myrandomtesttopic --from-beginning [2016-03-30 15:46:17,568] ERROR Unknown error when running consumer:

Re: consumer group, why commit requests are not considered as effective heartbeats?

2016-03-30 Thread Caesar Ralf Franz Hoppen
I would like to add a little more context: the problem is not hard to reproduce. If you are using - auto commit - heartbeat time = commit time - more than one consumer it seems the heartbeat always fails to be sent. Changing the values for the heartbeat and commit to be
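
A consumer config sketch of the separation being suggested: keep the heartbeat interval well below the session timeout, and distinct from the auto-commit interval (all values illustrative).

```properties
session.timeout.ms=30000
# Heartbeats should fire several times per session timeout...
heartbeat.interval.ms=3000
enable.auto.commit=true
# ...and not coincide with the commit cadence.
auto.commit.interval.ms=5000
```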

Re: KafkaProducer "send" blocks on first attempt with Kafka server offline

2016-03-30 Thread Oleg Zhurakousky
I'll buy both 'back pressure' and 'block' argument, but what does it have to do with the Future? Isn't that the main point of the Future - a reference to an invocation that may or may not occur some time in the future? Isn't that the purpose of the Future.get(..) to give user a choice and
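
The choice a Future gives the caller can be shown with stdlib Java alone. Here `sendAsync` is a stand-in for `producer.send()`, not a Kafka API: the point is that any method returning a `Future` lets the caller pick fire-and-forget, a bounded wait, or an unbounded wait.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class FutureChoice {
    // Stand-in for an async send that completes with an acknowledgement.
    static Future<String> sendAsync(ExecutorService pool) {
        return pool.submit(() -> "ack");
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> f = sendAsync(pool);
        try {
            // Bounded wait: the caller, not the library, decides how long.
            System.out.println(f.get(1, TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            System.out.println("no ack yet"); // caller decides what to do next
        } finally {
            pool.shutdown();
        }
    }
}
```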

Kafka Batch and Fetch

2016-03-30 Thread manish jaiswal
Hi, I am new to Kafka and I have a doubt about Kafka batching. In the 0.9 producer, how does batching work? If I set the batch size to 10 MB, do all messages in a 10 MB batch go to one offset on the Kafka broker? In the new high-level consumer, how does fetch size work? Is it a per-message fetch size inside one offset, or whole
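
For reference, the relevant 0.9 settings are byte-based, not message-based, and each message keeps its own offset regardless of how it was batched (values illustrative; 10 MB = 10485760 bytes).

```properties
# Producer: maximum bytes batched per partition before a request is sent.
batch.size=10485760
# New consumer: minimum bytes the broker should return per fetch...
fetch.min.bytes=1
# ...and the maximum bytes returned per partition per fetch.
max.partition.fetch.bytes=1048576
```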

kafka broker sizing

2016-03-30 Thread manish jaiswal
Hi, Can we set a Kafka broker's size? Suppose my system has 500 GB of space; can I set up a Kafka broker with 100 GB on my system? Thanks Manish
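
There is no single broker-size knob, but retention settings can bound disk usage per partition log. A sketch (values illustrative): total broker usage is roughly this value times the number of partition replicas the broker hosts, plus in-flight segments.

```properties
# Cap each partition's log at ~10 GB before old segments are deleted.
log.retention.bytes=10737418240
# Also expire segments by age, whichever limit is hit first.
log.retention.hours=72
```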

Re: cluster no response due to replication

2016-03-30 Thread jinhong lu
anyone help? > On 2016-03-29 at 18:57, jinhong lu wrote: > > > > Hi, I found this log in my server.log. > > The offset on the replica is larger than the leader's, so the replica's data > will be deleted, and then the data will be copied from the leader. > But when copying, the cluster is very

Re: Why does "unknown" show up in the output when describing a group using the ConsumerGroupCommand?

2016-03-30 Thread Michael Freeman
Was wondering the same. From what I can tell it shows unknown when no committed offset is recorded for that partition by the consumer. On Mon, Mar 28, 2016 at 12:25 PM, craig w wrote: > When using the ConsumerGroupCommand to describe a group (using > new-consumer, 0.9.0.1)