Re: Problem consuming from broker 1.1.0

2018-06-12 Thread Manikumar
Can you post consumer debug logs? You can enable console consumer debug logs here: kafka/config/tools-log4j.properties On Wed, Jun 13, 2018 at 9:55 AM Craig Ching wrote: > Hi! > > We’re having a problem with a new kafka cluster at 1.1.0. The problem is, > in general, that consumers can’t consum
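For reference, the change being suggested is a one-line edit in `kafka/config/tools-log4j.properties`, which controls logging for the CLI tools (including the console consumer). A minimal sketch, assuming the stock file that ships with 1.1.0 (check your copy before editing):

```properties
# kafka/config/tools-log4j.properties
# Raise the root level from WARN to DEBUG so the console consumer
# prints its internal activity (fetches, coordinator lookups, etc.)
log4j.rootLogger=DEBUG, stderr

log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
```

The debug output goes to stderr, so it can be redirected separately from the consumed messages on stdout.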

Problem consuming from broker 1.1.0

2018-06-12 Thread Craig Ching
Hi! We’re having a problem with a new kafka cluster at 1.1.0. The problem is, in general, that consumers can’t consume from the different broker (the old broker was 0.11, I think). The easiest recipe I have for reproducing the problem is downloading kafka 1.0.1 and running the console consumer ca

Re: Details of segment deletion

2018-06-12 Thread Ted Yu
Minor clarification (since new segment appeared twice) : bq. before a new one is deleted. The 'new one' (in the last sentence) would become old when another segment is created. Cheers On Tue, Jun 12, 2018 at 6:42 PM, Gwen Shapira wrote: > See below: > > On Mon, Jun 11, 2018 at 3:36 AM, Simon

Re: Details of segment deletion

2018-06-12 Thread Gwen Shapira
See below: On Mon, Jun 11, 2018 at 3:36 AM, Simon Cooper < simon.coo...@featurespace.co.uk> wrote: > Hi, > > I've been trying to work out the details of when exactly kafka log segments > get deleted due to the retention period, so it would be helpful if someone > could clarify the behaviour: > > >
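As a rough sketch of the broker settings involved (values here are illustrative, not recommendations): retention is evaluated per segment, the active segment is never deleted, and the deletion check only runs periodically, so a segment can outlive the retention period by up to the check interval plus the age of its newest message.

```properties
# server.properties (illustrative values)
log.segment.bytes=1073741824            # roll a new segment at ~1 GiB...
log.roll.ms=604800000                   # ...or after 7 days, whichever comes first
log.retention.ms=259200000              # segment eligible for deletion once its
                                        # newest message is older than 3 days
log.retention.check.interval.ms=300000  # deletion check wakes up every 5 minutes
```

The same knobs exist per topic as `segment.bytes`, `segment.ms`, and `retention.ms`.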

Re: INVALID_FETCH_SESSION_EPOCH after upgrade to 1.1.0

2018-06-12 Thread Ted Yu
Before Errors.INVALID_FETCH_SESSION_EPOCH is returned, FetchSession.scala would log the reason for the response. There are 3 cases, 2 with info log and 1 with debug log. Here is one code snippet: if (session.epoch != reqMetadata.epoch()) { debug(s"Created a new error Fet

INVALID_FETCH_SESSION_EPOCH after upgrade to 1.1.0

2018-06-12 Thread Mark Anderson
We recently updated our Kafka brokers and clients to 1.1.0. Since the upgrade we periodically see INFO log entries such as INFO Jun 08 08:30:20.335 61161458 [KafkaRecordConsumer-0] org.apache.kafka.clients.FetchSessionHandler [Consumer clientId=consumer-1, groupId=group_60_10] Node 3 was unable to
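For context, incremental fetch sessions are new in 1.1.0 (KIP-227), and one broker-side setting that interacts with these INFO messages is the session cache size: when more fetchers (consumers plus follower replicas) are active than there are cache slots, the broker evicts sessions, and the evicted client logs a `FetchSessionHandler` message and falls back to a full fetch request. Illustrative value only:

```properties
# server.properties
# Number of incremental fetch sessions the broker caches (default 1000).
# Evictions are benign but cause the client-side INFO messages above.
max.incremental.fetch.session.cache.slots=2000
```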

Re: kafka Broker Znode TTL

2018-06-12 Thread harish lohar
Hi, This issue happens when your zookeeper cluster is down; in that case the znode is not removed. We are running both zookeeper and kafka on the same machine, so if the machine goes down and leaves half or fewer of the zookeeper cluster nodes up, there is currently no way to clear the /broker/ids/ nodes. Also

Hoping to see the community at Kafka Summit SF

2018-06-12 Thread Gwen Shapira
Hello Kafka users and contributors, Kafka Summit SF call for proposal is open until Saturday, June 16. You are all invited to submit your talk proposals. Sharing your knowledge, stories and experience is a great way to contribute to the community. I consistently notice that people with great stor

Transactional producer and storing offsets outside Kafka

2018-06-12 Thread Tushar Madhukar
*Kafka v1.1.0* Hi, I have been experimenting with the transactional producer in Kafka and have a question related to it. We are moving towards storing partition offsets outside of Kafka. We have a lot of consumers reading off a few busy topics. We started seeing a lot of traffic on the __consum
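For context, the client configs involved on each side are roughly these (standard client config names; values are hypothetical placeholders):

```properties
# producer side
transactional.id=my-app-producer-1   # placeholder; must be stable per producer instance
enable.idempotence=true              # implied when transactional.id is set

# consumer side
isolation.level=read_committed       # skip records from aborted transactions
enable.auto.commit=false             # required if offsets are managed outside Kafka
```

When offsets are stored in Kafka, the producer's `sendOffsetsToTransaction` commits them atomically with the produced records; storing them externally means that atomicity has to be provided by the external store instead.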

Installation guide for multi node set up of Kafka

2018-06-12 Thread Sumit Baurai
Hi, Is there an official guide from Confluent that can be followed for setting up a multi-node Confluent cluster? If yes, could you please point me to it. Thanks in anticipation *Sumit Baurai*

Does Kafka-1.1.0 support JDK 10

2018-06-12 Thread manideep G
Hi, Does kafka support JDK 10? The recently updated documentation mentions only JDK 9. Is it advisable to use JDK 10 with kafka 1.1.0? We are planning to use JDK 10 and Kafka 1.1.0 at our office. -- Best Regards, Manideep

Consumers cannot consume topic through the public network

2018-06-12 Thread 孟庆建
On the internal network, with bootstrap.servers set to the floating IP, production and consumption work normally. After configuring bootstrap.servers locally as the public IP and port of the server, my producer produces very slowly, and consumers can
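A common cause of this symptom is that the broker advertises its internal address: clients bootstrap over the public IP, but the metadata response then points them at an address they cannot reach. A typical dual-listener sketch (all IPs and ports below are placeholders):

```properties
# server.properties (placeholder addresses)
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.5:9092,EXTERNAL://203.0.113.10:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

Whatever appears in `advertised.listeners` for the external listener must be reachable from the public-network clients.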