Re: Kafka constant shrinking and expanding after deleting a topic

2016-04-05 Thread Alexis Midon
I ran into the same issue today. In a production cluster, I noticed the "Shrinking ISR for partition" log messages for a topic deleted 2 months ago. Our staging cluster shows the same messages for all the topics deleted in that cluster. Both clusters run 0.8.2. Yifan, Guozhang, did you find a way to get rid of

Re: KafkaProducer Retries in 0.9.0.1

2016-04-05 Thread Manikumar Reddy
Hi, Producer message size validation checks ("buffer.memory", "max.request.size" ) happens before batching and sending messages. Retry mechanism is applicable for broker side errors and network errors. Try changing "message.max.bytes" broker config property for simulating broker side error.
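The split Manikumar describes can be sketched as config (property names are from the 0.9 docs; the values are illustrative, not recommendations):

```properties
# Producer side: size limits are validated before batching, so a violation
# fails immediately and is NOT retried.
buffer.memory=33554432
max.request.size=1048576
# Retries only cover retriable broker/network errors.
retries=3
retry.backoff.ms=100

# Broker side (server.properties): lowering this below the record size is
# one way to provoke a broker-side "message too large" error for testing.
message.max.bytes=1000012
```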

KafkaProducer Retries in 0.9.0.1

2016-04-05 Thread christopher palm
Hi All, I am working with the KafkaProducer using the properties below, so that the producer keeps trying to send upon failure, on Kafka 0.9.0.1. I am forcing a failure by setting my buffer size smaller than my payload, which causes the expected exception below. I don't see the producer retry to

Re: How new consumer get to know about zookeeper URL?

2016-04-05 Thread Ratha v
Thanks Ian On 6 April 2016 at 13:26, Ian Wrigley wrote: > Hi Ratha > > New Consumers don’t use ZooKeeper; all offsets are stored in a Kafka topic. > > Regards > > Ian. > > > On Apr 5, 2016, at 10:20 PM, Ratha v wrote: > > > > Hi all; > > Im using kafka

Re: How new consumer get to know about zookeeper URL?

2016-04-05 Thread Ian Wrigley
Hi Ratha New Consumers don’t use ZooKeeper; all offsets are stored in a Kafka topic. Regards Ian. > On Apr 5, 2016, at 10:20 PM, Ratha v wrote: > > Hi all; > Im using kafka 0.9.0.1 V with new consume APIs. > > I would like to know how new consumers get to know about
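The difference Ian describes shows up directly in the client configs; a sketch (host names are placeholders):

```properties
# Old (0.8) high-level consumer: discovered brokers via ZooKeeper.
zookeeper.connect=zk1.example.com:2181

# New (0.9) consumer: talks only to brokers; offsets are stored in the
# internal __consumer_offsets topic, so no ZooKeeper URL is needed.
bootstrap.servers=broker1.example.com:9092,broker2.example.com:9092
group.id=my-group
```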

Re: Is there any behavioural change to connect local server and remote server?

2016-04-05 Thread Ratha v
Here is my current server.properties; (Do I need to change host.name too?) listeners=PLAINTEXT://:9092 # The port the socket server listens on port=9092 # Hostname the broker will bind to. If not set, the server will bind to all interfaces #host.name=localhost # Hostname the broker will

How new consumer get to know about zookeeper URL?

2016-04-05 Thread Ratha v
Hi all; I'm using Kafka 0.9.0.1 with the new consumer APIs. I would like to know how new consumers get to know about the ZooKeeper URL? Thanks -- -Ratha http://vvratha.blogspot.com/

Re: Log Retention: What gets deleted

2016-04-05 Thread Gwen Shapira
I think you got it almost right. The missing part is that we only delete whole partition segments, not individual messages. As you are writing messages, every X bytes or Y milliseconds, a new file gets created for the partition to store new messages in. Those files are called segments. The
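Gwen's "every X bytes or Y milliseconds" maps onto broker settings like these (values are illustrative; the key point is that retention deletes whole segment files, never individual messages):

```properties
# Roll a new segment file after 1 GB or 7 days, whichever comes first.
log.segment.bytes=1073741824
log.roll.ms=604800000
# A closed segment becomes eligible for deletion once it falls entirely
# outside the retention window.
log.retention.hours=168
```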

WARN Error while fetching metadata with correlation id 1 : {MY_TOPIC?=INVALID_TOPIC_EXCEPTION} (org.apache.kafka.clients.NetworkClient)

2016-04-05 Thread Ratha v
Hi all; when I run the following command with Kafka 0.9.0.1 I get these warnings [1]. Can you please tell me what is wrong with my topics? (I'm talking to the Kafka broker which runs in EC2) *#./kafka-console-consumer.sh --new-consumer --bootstrap-server kafka.xx.com:9092

Re: Is there any behavioural change to connect local server and remote server?

2016-04-05 Thread Ratha v
Hi Ewen; Thanks. Yes, the broker configuration has been set as you mentioned. But when I try the following command, I see this exception. Do you know the reason? * #kafka-console-consumer.sh --new-consumer --bootstrap-server kafka.xx.com:9092 --topic TEST_NPB4?* [2016-04-06

Re: Kafka constant shrinking and expanding after deleting a topic

2016-04-05 Thread Guozhang Wang
It is possible, there are some discussions about a similar issue in KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-53+-+Add+custom+policies+for+reconnect+attempts+to+NetworkdClient mailing thread: https://www.mail-archive.com/dev@kafka.apache.org/msg46868.html Guozhang On Tue,

Re: Kafka Streams: context.forward() with downstream name

2016-04-05 Thread Guozhang Wang
HI Josh, Re 1): transform is only for usage in the higher-level DSL, while in the lower-level APIs people are expected to work with Processor only, which for now use context.forward() to send record to the downstream processors. Re 2): I have a few questions for your propose: with different

Re: Error happen when kafka 0.9 client try connect with server, with SSL enabled

2016-04-05 Thread westfox
Ismael, It works after follow your advice. Thanks a lot! Ping On Tue, Apr 5, 2016 at 5:16 PM, Ismael Juma wrote: > Hi Ping, > > The problem is advertised.host.name, which only advertises a PLAINTEXT > port. You should use advertised.listeners instead. > > Ismael > > On Tue,

Re: Error happen when kafka 0.9 client try connect with server, with SSL enabled

2016-04-05 Thread Ismael Juma
Hi Ping, The problem is advertised.host.name, which only advertises a PLAINTEXT port. You should use advertised.listeners instead. Ismael On Tue, Apr 5, 2016 at 8:42 PM, westfox wrote: > Hi, > > Got error when client try to talk with kafka server kafka_2.11-0.9.0.1, > with
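Ismael's fix, sketched as broker config (host names are placeholders): advertised.host.name can only advertise a single PLAINTEXT endpoint, while advertised.listeners can advertise one endpoint per security protocol:

```properties
listeners=PLAINTEXT://:9092,SSL://:9093
advertised.listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9093
```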

Re: Kafka Streams: context.forward() with downstream name

2016-04-05 Thread josh gruenberg
Hi Guozhang, I'll reply to your points in-line below: On Tue, Apr 5, 2016 at 10:23 AM Guozhang Wang wrote: > Hi Josh, > > I think there are a few issues that we want to resolve here, which could be > orthogonal to each other. > > 1) one-to-many mapping in transform()

Error happen when kafka 0.9 client try connect with server, with SSL enabled

2016-04-05 Thread westfox
Hi, Got an error when the client tries to talk with Kafka server kafka_2.11-0.9.0.1, with SSL enabled. The same error happens for both consumer and producer. Can anyone help? Thanks Ping == Error msg at server side [2016-04-05 19:26:04,836] ERROR [KafkaApi-1] error when

Re: Is there any behavioural change to connect local server and remote server?

2016-04-05 Thread Ewen Cheslack-Postava
Ratha, In EC2, you probably need to use the advertised.listeners setting (or advertised.host and advertised.port on older brokers). This is because EC2 has internal and external addresses for each instance. -Ewen On Tue, Apr 5, 2016 at 5:13 AM, Ratha v wrote: > Hi all;
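A hedged sketch of the EC2 setup Ewen describes: the broker binds on the internal interface but advertises the external DNS name, so clients outside EC2 get a reachable address back from the bootstrap connection (host name is a placeholder):

```properties
# Bind on all interfaces inside the instance.
listeners=PLAINTEXT://0.0.0.0:9092
# Advertise the externally resolvable address to clients.
advertised.listeners=PLAINTEXT://ec2-203-0-113-10.compute-1.amazonaws.com:9092
```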

Low-level Consumer Example (Scala)

2016-04-05 Thread Afshartous, Nick
Hi, I'm looking for a complete low-level consumer example. Ideally one in Scala that continuously consumes from a topic and commits offsets. Thanks for any pointers, -- Nick

Question on Java client consumer.seek()

2016-04-05 Thread Mario Ricci
I found this thread and see that poll() must be called before seek(). This seems unintuitive and the error message ("No current assignment for partition

Re: Kafka Streams: context.forward() with downstream name

2016-04-05 Thread Guozhang Wang
Hi Josh, I think there are a few issues that we want to resolve here, which could be orthogonal to each other. 1) one-to-many mapping in transform() function that generates a single stream (i.e. single typed key-value pairs). Since transform() already enforces to make type-safe return values,

data not replicated to followers by leader

2016-04-05 Thread maverick m .
We saw strange behavior with Kafka 0.8.2 brokers today. Scenario: we have 3 Kafka brokers in dev, and each topic has replication factor 3. We have a topic X with 10 partitions. There are about 30 topics on the cluster. We saw that, just for topic X, 1 partition was not replicated

Re: Kafka constant shrinking and expanding after deleting a topic

2016-04-05 Thread Guozhang Wang
These configs mainly depend on your publish throughput, since the replication throughput is upper-bounded by the publish throughput. If the publish throughput is not high, then setting lower threshold values in these two configs will cause churn in shrinking/expanding ISRs. Guozhang
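The two configs in question are presumably the 0.8-era replica lag thresholds; a sketch with their 0.8.2 defaults:

```properties
# A follower is dropped from the ISR if it has not sent a fetch request
# for this long...
replica.lag.time.max.ms=10000
# ...or if it has fallen more than this many messages behind the leader
# (this message-count check was removed in 0.9).
replica.lag.max.messages=4000
```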

Re: Kafka Streams: context.forward() with downstream name

2016-04-05 Thread josh gruenberg
Hi all, Just chiming in with Yuto: I think the custom Processor becomes attractive in scenarios where a node in the graph may emit to a variety of downstream paths, possibly after some delay, depending on logic. This can probably often be achieved with the existing DSL using some combination of

Disable offset management

2016-04-05 Thread Jakub Neubauer
Hi, we are using Kafka server + new client 0.9.0.1. In our application we process log events and aggregate the results in RDBMS. In order to achieve data consistency, when we process a polled log event list (a batch), we update DB in one transaction and we also save the partition offsets in the
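Jakub's pattern — commit the aggregated results and the partition offsets in the same DB transaction, then seek() to the stored offsets on startup instead of relying on Kafka's offset commits — can be sketched with stdlib sqlite3 standing in for the RDBMS (the Kafka poll loop is stubbed out; all table and column names are illustrative):

```python
import sqlite3

# In-memory DB standing in for the real RDBMS.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (k TEXT PRIMARY KEY, v INTEGER)")
db.execute("CREATE TABLE offsets (topic TEXT, part INTEGER, next_offset INTEGER, "
           "PRIMARY KEY (topic, part))")

def process_batch(batch):
    """Apply one polled batch and its end offsets in a SINGLE transaction,
    so results and offsets can never diverge."""
    with db:  # commits on success, rolls back on exception
        for topic, part, offset, key, value in batch:
            db.execute("INSERT OR REPLACE INTO results VALUES (?, ?)", (key, value))
            # Record the NEXT offset to consume for this partition.
            db.execute("INSERT OR REPLACE INTO offsets VALUES (?, ?, ?)",
                       (topic, part, offset + 1))

# A stubbed poll() result: tuples of (topic, partition, offset, key, value).
process_batch([("logs", 0, 41, "a", 1), ("logs", 0, 42, "a", 2)])

# On restart, read offsets back and consumer.seek(partition, next_offset)
# instead of letting auto.offset.reset decide.
print(db.execute("SELECT next_offset FROM offsets WHERE topic='logs'").fetchone()[0])
```

Because Kafka's own committed offsets are ignored on startup, a crash between the DB commit and any Kafka-side commit cannot cause duplicates or loss: the DB is the single source of truth.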

Is there any behavioural change to connect local server and remote server?

2016-04-05 Thread Ratha v
Hi all; Is there any different connection mechanism for local and remote (EC2 instance) servers? I'm asking because my consumer is not working with the remote server. -- -Ratha http://vvratha.blogspot.com/

Message reconsumed with 'earliest' offset reset 0.9.0.1

2016-04-05 Thread Michael Freeman
Hi, I'm using the 0.9.0.1 consumer with 'earliest' offset reset. After cleanly shutting down the consumers and restarting I see reconsumption of some old messages. The offset of the reconsumed messages is 0. If I'm committing cleanly and shutting down cleanly why is the committed offset

RE: New consumer API waits indefinitely

2016-04-05 Thread Lohith Samaga M
Hi Ismael, Niko, After cleaning up the zookeeper and kafka logs, I do not get the below server exception anymore. I think Kafka did not like me opening the .log file in notepad. The only exception that I now get is

RE: New consumer API waits indefinitely

2016-04-05 Thread Lohith Samaga M
Hi Ismael, I see the following exception when I (re)start Kafka (even a fresh setup after the previous one). And where is the configuration to set the data directory for Kafka (not the logs)? java.io.IOException: The requested operation cannot be performed on a file with a user-mapped

RE: New consumer API waits indefinitely

2016-04-05 Thread Lohith Samaga M
Thanks Niko! I think I missed an org.apache.kafka.clients.consumer.internals.SendFailedException at the very beginning (or at least it is giving an exception today). Even after using a new install of Kafka, I get the same errors. Strangely, all topics are re-created in the logs. I