Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-27 Thread Ismael Juma
Hi Pierre and Jason, A comment below. On Wed, Jan 27, 2016 at 9:01 PM, Jason Gustafson wrote: > Hi Pierre, > > Thanks for your persistence on this issue. I've gone back and forth on this > a few times. The current API can definitely be annoying in some cases, but > breaking

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-27 Thread Gwen Shapira
I have a minor preference toward modifying the API. Because it is source-compatible and protocol-compatible, the only case that will break is if you use client code from one version but run with a JAR from a different version, which sounds like a pretty weird setup in general. It's not a strong
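
A minimal sketch of the compatibility trade-off being discussed, using hypothetical stand-in interfaces for the signatures before and after KIP-45 (not the real consumer classes):

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.List;

    // Hypothetical stand-ins for the API before and after the change.
    interface OldConsumer { void subscribe(List<String> topics); }
    interface NewConsumer { void subscribe(Collection<String> topics); }

    class Kip45Sketch {
        // Source compatibility holds: a call written against subscribe(List)
        // still compiles against subscribe(Collection), since List is a Collection.
        static void useOld(OldConsumer consumer) {
            consumer.subscribe(Arrays.asList("my-topic"));
        }
        static void useNew(NewConsumer consumer) {
            consumer.subscribe(Arrays.asList("my-topic"));
        }
        // Binary compatibility is what breaks: a .class compiled against the old
        // JAR embeds the descriptor subscribe(Ljava/util/List;)V and throws
        // NoSuchMethodError against a JAR that only ships
        // subscribe(Ljava/util/Collection;)V -- the mixed-version setup above.
    }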

Re: HELP PLEASE->Kafka 0.9.0.0 create topic throwing ERROR kafka.admin.AdminOperationException: replication factor: 1 larger than available brokers: 0 with zookeeper 3.4.6

2016-01-27 Thread Gwen Shapira
Did you check your brokers are running? On Wed, Jan 27, 2016 at 1:30 PM, Sandhu, Dilpreet wrote: > Hi all, > I am using Kafka 0.9.0.0 with Zookeeper 3.4.6. I am not sure if I > am missing anything :( > When I try to create any topic I get the following error:- > >

Re: HELP PLEASE->Kafka 0.9.0.0 create topic throwing ERROR kafka.admin.AdminOperationException: replication factor: 1 larger than available brokers: 0 with zookeeper 3.4.6

2016-01-27 Thread Sandhu, Dilpreet
Hi Gwen, Thanks for responding. The issue was with my init.d script; it's resolved now. Thanks once again for your valuable time and help. Best regards, Dilpreet On 1/27/16, 2:24 PM, "Gwen Shapira" wrote: >Did you check your brokers are running? > >On Wed, Jan 27, 2016

Kafka 0.9 mirror maker - different destination topic name

2016-01-27 Thread Mhaskar, Tushar
Hi All, How do I use the Kafka 0.9 MirrorMaker to produce data to a different topic name? Gwen mentioned that the --message.handler option needs to be used, but I couldn't find any documentation related to it. Any pointers will be useful. Thanks, Tushar
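
A sketch of what such a handler might look like. The MirrorMakerMessageHandler signature below is a reading of the 0.9 source (kafka.tools.MirrorMaker), not documented API, so treat it as an assumption:

    import java.util.Collections;
    import java.util.List;
    import kafka.consumer.BaseConsumerRecord;
    import kafka.tools.MirrorMaker;
    import org.apache.kafka.clients.producer.ProducerRecord;

    // Hypothetical handler that mirrors every record onto a renamed topic.
    // Wire it in with: --message.handler com.example.TopicRenameHandler
    public class TopicRenameHandler implements MirrorMaker.MirrorMakerMessageHandler {
        @Override
        public List<ProducerRecord<byte[], byte[]>> handle(BaseConsumerRecord record) {
            return Collections.singletonList(
                new ProducerRecord<byte[], byte[]>(
                    "mirror." + record.topic(), record.key(), record.value()));
        }
    }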

Re: Kafka Connect Converter per Connector

2016-01-27 Thread Gwen Shapira
Hi Eric, 1. You are correct that the way to handle custom data formats in Kafka is to use a custom converter. 2. You are also correct that we are currently assuming one converter per Connect instance / cluster that all connectors share (in the hope that each organization has one common data
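
For reference, a minimal pass-through converter sketch for raw binary data, written against the org.apache.kafka.connect.storage.Converter interface (a hypothetical class, not something shipped with 0.9):

    import java.util.Map;
    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaAndValue;
    import org.apache.kafka.connect.storage.Converter;

    // Sketch: hand raw bytes to the connector untouched instead of
    // deserializing them into Connect's data model.
    public class RawBytesConverter implements Converter {
        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {}

        @Override
        public byte[] fromConnectData(String topic, Schema schema, Object value) {
            return (byte[]) value;
        }

        @Override
        public SchemaAndValue toConnectData(String topic, byte[] value) {
            return new SchemaAndValue(Schema.OPTIONAL_BYTES_SCHEMA, value);
        }
    }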

Re: Offset storage issue with kafka(0.8.2.1)

2016-01-27 Thread James Cheng
> On Jan 27, 2016, at 8:25 PM, Sivananda Reddys Thummala Abbigari > wrote: > > Hi, > > # *Kafka Version*: 0.8.2.1 > > # *My consumer.properties has the following properties*: >exclude.internal.topics=false >offsets.storage=kafka >dual.commit.enabled=false >

Offset storage issue with kafka(0.8.2.1)

2016-01-27 Thread Sivananda Reddys Thummala Abbigari
Hi, # *Kafka Version*: 0.8.2.1 # *My consumer.properties has the following properties*: exclude.internal.topics=false offsets.storage=kafka dual.commit.enabled=false # With the above configuration the offsets should be stored in Kafka instead of ZooKeeper, but I see that offsets are
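
One common gotcha worth checking (an assumption about this setup, not a diagnosis): the 0.8.2 high-level consumer reads these settings from the Properties passed to ConsumerConfig, not from config/consumer.properties, unless your code loads that file itself. A sketch with placeholder connection values:

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class KafkaOffsetStorageExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // placeholder
            props.put("group.id", "my-group");                // placeholder
            props.put("exclude.internal.topics", "false");
            props.put("offsets.storage", "kafka");
            props.put("dual.commit.enabled", "false");
            // With these settings, commits should go to the __consumer_offsets
            // topic rather than ZooKeeper.
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
            connector.shutdown();
        }
    }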

Accumulating data in Kafka Connect source tasks

2016-01-27 Thread Randall Hauch
I’m creating a custom Kafka Connect source connector, and I’m running into a situation for which Kafka Connect doesn’t seem to provide a solution out of the box. I thought I’d first post to the users list in case I’m just missing a feature that’s already there. My connector’s SourceTask
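
A bare-bones sketch of one common shape for this (an assumption, not Randall's actual connector): accumulate records on a queue from a background thread and drain them in poll(), which Connect calls repeatedly on its own thread:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.kafka.connect.source.SourceRecord;
    import org.apache.kafka.connect.source.SourceTask;

    public class AccumulatingSourceTask extends SourceTask {
        private final BlockingQueue<SourceRecord> buffer = new LinkedBlockingQueue<>();

        @Override public String version() { return "0.1"; }

        @Override public void start(Map<String, String> props) {
            // start background work that feeds this.buffer
        }

        @Override
        public List<SourceRecord> poll() throws InterruptedException {
            List<SourceRecord> batch = new ArrayList<>();
            batch.add(buffer.take()); // block until something arrives
            buffer.drainTo(batch);    // then grab whatever else is ready
            return batch;
        }

        @Override public void stop() {}
    }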

Re: Getting very poor performance from the new Kafka consumer

2016-01-27 Thread Rajiv Kurian
Hi Guozhang, The GitHub link I pasted was from the 0.9.0 branch. The same line seems to be throwing exceptions in my code built from the Maven 0.9.0.0 package. Are you saying that something else has changed higher up the call stack that will probably not trigger so many exceptions? Thanks, Rajiv

Question on Using many producers simultaneously

2016-01-27 Thread Anirudh P
Hello, We have a scenario where we will be having a large number of producers (~50k instances, 1 producer per instance) which will be sending data to a Kafka topic. The producers could be inactive for a long time and then be asked to send a message (all at the same time) and as quickly as

Re: Kafka 0.9 -> consumer.poll() occasionally returns 0 elements

2016-01-27 Thread Jason Gustafson
Hey Tao, If you increase "receive.buffer.bytes" to 64K, can you still reproduce the problem? -Jason On Tue, Jan 26, 2016 at 11:18 PM, Krzysztof Ciesielski < krzysztof.ciesiel...@softwaremill.pl> wrote: > Jason, > > My os/vm is OSX 10.11.3, JDK 1.8.0.40 > > — > Krzysztof > On 26 January 2016 at
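
Concretely, that change looks like this in consumer code (broker address and group are placeholders; the 0.9 default for receive.buffer.bytes is 32K, if I read the config docs right):

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BufferSizeExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("group.id", "test-group");              // placeholder
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
            // Jason's suggestion: raise the socket receive buffer to 64K.
            props.put("receive.buffer.bytes", "65536");
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
            consumer.close();
        }
    }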

Re: Getting very poor performance from the new Kafka consumer

2016-01-27 Thread Jason Gustafson
Hey Rajiv, Thanks for the detailed report. Can you go ahead and create a JIRA? I do see the exceptions locally, but not nearly at the rate that you're reporting. That might be a factor of the number of partitions, so I'll do some investigation. -Jason On Wed, Jan 27, 2016 at 8:40 AM, Rajiv

Shutting down Producer

2016-01-27 Thread Joe San
Is it mandatory to properly shut down a Kafka producer? I have a single producer instance in my web application. When we deploy / restart this web application, we terminate the JVM process and start the web application afresh. So why should I worry about calling the close method on

Only interested in certain partitions

2016-01-27 Thread Jens Rantil
Hi, Background: I am using Kafka 0.9 with the Java client. I have a consumer with a fixed set of keys that it is interested in from a given topic. Assuming I have many partitions, I could manually assign my consumer to listen only to the relevant partitions, given my keys. Question: Given a key

Re: Getting very poor performance from the new Kafka consumer

2016-01-27 Thread Rajiv Kurian
Hi Jason, Thanks for investigating. Indeed we do have probably more than the usual number of partitions. Our use case is such that we have many partitions (128 - 256) overall but very few messages per second on each partition. I have created a JIRA at

Re: Shutting down Producer

2016-01-27 Thread Ewen Cheslack-Postava
If you don't shut it down properly and there are outstanding requests (e.g. if you call producer.send() and don't call get() on the returned future), then you could potentially lose data. Calling producer.close() flushes all the data before returning, so shutting down properly ensures no data will
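
A small sketch of that advice (placeholder broker address); note that a shutdown hook like this runs on normal JVM termination but not on a hard kill:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ProducerShutdown {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            final KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            Runtime.getRuntime().addShutdownHook(new Thread() {
                @Override public void run() {
                    producer.close(); // flushes outstanding sends before returning
                }
            });
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }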

Re: Only interested in certain partitions

2016-01-27 Thread Ewen Cheslack-Postava
One option is to instantiate and invoke the DefaultPartitioner yourself (or whatever partitioner you've specified for partitioner.class). However, that will require passing in a Cluster object, which you'll need to construct yourself. This is just used to get the number of partitions for the topic
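
As a rougher alternative to constructing a Cluster object, here is a sketch that replicates the default partitioner's hashing directly (murmur2 over the serialized key, positive-mod over the partition count). The formula is a reading of the 0.9 DefaultPartitioner and Utils.murmur2 is internal API, so treat both as assumptions that may drift across versions:

    import java.nio.charset.StandardCharsets;
    import java.util.ArrayList;
    import java.util.List;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.utils.Utils;

    public class KeyedAssignment {
        // Assumes the producer serializes keys with StringSerializer (UTF-8).
        static int partitionFor(String key, int numPartitions) {
            byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
            return (Utils.murmur2(keyBytes) & 0x7fffffff) % numPartitions;
        }

        // Manually assign only the partitions the given keys hash to.
        static void assignFor(KafkaConsumer<?, ?> consumer, String topic,
                              List<String> keys, int numPartitions) {
            List<TopicPartition> parts = new ArrayList<>();
            for (String key : keys) {
                TopicPartition tp =
                    new TopicPartition(topic, partitionFor(key, numPartitions));
                if (!parts.contains(tp)) parts.add(tp);
            }
            consumer.assign(parts);
        }
    }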

Kafka 0.9.0.0 create topic throwing KeeperErrorCode = NoNode for /brokers/ids with zookeeper 3.4.6

2016-01-27 Thread Sandhu, Dilpreet
Hi all, I am using Kafka 0.9.0.0 with Zookeeper 3.4.6. I am not sure if I am missing anything :( When I try to create any topic I get the following error:- Error while executing topic command : replication factor: 1 larger than available brokers: 0 [2016-01-27 20:35:53,738] ERROR

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-27 Thread Jason Gustafson
Hi Pierre, Thanks for your persistence on this issue. I've gone back and forth on this a few times. The current API can definitely be annoying in some cases, but breaking compatibility still sucks. We do have the @Unstable annotation on the API, but it's unclear what exactly it means and I'm

Kafka Connect Converter per Connector

2016-01-27 Thread Eric Lachman
Hi, I am trying out Kafka Connect and have a couple of questions. We are directly publishing raw binary data to Kafka from one of our apps and wanted to create a Kafka Connect sink to move the raw data to something like Cassandra. Since this data is directly published to Kafka it doesn't have

Re: Kafka 0.9.0.0 create topic throwing ERROR kafka.admin.AdminOperationException: replication factor: 1 larger than available brokers: 0 with zookeeper 3.4.6

2016-01-27 Thread Sandhu, Dilpreet
Ignore my last email, sorry. >Hi all, >I am using Kafka 0.9.0.0 with Zookeeper 3.4.6. I am not sure if I >am missing anything :( >When I try to create any topic I get the following error:- > > >Error while executing topic command : replication factor: 1 larger than >available brokers: 0 >

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-27 Thread Pierre-Yves Ritschard
Hi Jason, Thanks for weighing in on this. Here's my take: - I initially opted for overloading, but this met resistance (most vocally from Jay Kreps). I don't have strong feelings either way (I tend to prefer the current proposal without overloading but would understand the need to add it

Sorry. Figured out it was the init.d script.

2016-01-27 Thread Sandhu, Dilpreet
Sorry for spamming you all with questions. Figured out it was my init.d script that was broken. :( On 1/27/16, 1:30 PM, "Sandhu, Dilpreet" wrote: >Hi all, >I am using Kafka 0.9.0.0 with Zookeeper 3.4.6. I am not sure if I >am missing anything :( >When I try to

Re: Broker Exception: Attempt to read with a maximum offset less than start offset

2016-01-27 Thread Robert Metzger
Yes, I've asked the user to test with the 0.9.0.0 release (I saw Gwen's comment in KAFKA-725). I have a potentially related question: is it an issue that neither Flink nor Gearpump* commits its offsets through the SimpleConsumer API? Flink is directly committing the offsets into ZK (and

Re: Broker Exception: Attempt to read with a maximum offset less than start offset

2016-01-27 Thread Ismael Juma
Hi Manu and Robert, It would help to know if this still happens in trunk or the 0.9.0 branch. Ismael On 27 Jan 2016 13:05, "Robert Metzger" wrote: > Hi Manu, > > in the streaming-benchmark, are seeing the issue only when reading with > Gearpump, or is it triggered by a

Re: Broker Exception: Attempt to read with a maximum offset less than start offset

2016-01-27 Thread Robert Metzger
Hi Manu, in the streaming-benchmark, are you seeing the issue only when reading with Gearpump, or is it triggered by other processing frameworks as well? I'm asking because there is a Flink user who is also on Kafka 0.8.2.1 and is reporting a very similar issue on SO: