Re: Kafka Producer - Producing to Multiple Topics

2020-08-21 Thread SenthilKumar K
it would be great if someone provides input(s)/hint :) thanks! --Senthil On Fri, Aug 21, 2020 at 3:28 PM SenthilKumar K wrote: > Updating the Kafka broker version: > > Kafka Version: 2.4.1 > > On Fri, Aug 21, 2020 at 3:21 PM SenthilKumar K > wrote: > >> Hi Team,

Re: Kafka Producer - Producing to Multiple Topics

2020-08-21 Thread SenthilKumar K
Updating the Kafka broker version: Kafka Version: 2.4.1 On Fri, Aug 21, 2020 at 3:21 PM SenthilKumar K wrote: > Hi Team, We have deployed 150 node Kafka cluster on production for our > use case. Recently I have seen issue(s) in Kafka Producer Client. > > Use Case: > --> (

Kafka Producer - Producing to Multiple Topics

2020-08-21 Thread SenthilKumar K
Hi Team, We have deployed a 150-node Kafka cluster in production for our use case. Recently I have seen issue(s) in the Kafka Producer client. Use Case: (Consume) Stream App (Multiple Topologies) (Transform) --> Kafka Producer Topology (Produce to Multiple Topics). Initially, the data is
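The producer-to-multiple-topics pattern from the post can be sketched as below. This is a minimal, hedged sketch, not the poster's actual topology: the broker address, topic names, and payloads are placeholders. The point is that a single KafkaProducer instance is thread-safe and serves any number of topics, since the topic is chosen per record.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MultiTopicProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder host
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // One producer instance can serve many topics; the topic is
        // chosen per record, not per producer.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic-a", "k1", "payload-1"),
                    (metadata, e) -> { if (e != null) e.printStackTrace(); });
            producer.send(new ProducerRecord<>("topic-b", "k2", "payload-2"),
                    (metadata, e) -> { if (e != null) e.printStackTrace(); });
        } // close() flushes pending records
    }
}
```

Reusing one producer across topics also keeps batching and connection pooling shared, which usually performs better than one producer per topic.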

Re: KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

2020-05-05 Thread SenthilKumar K
know in your scenario how you are managing > kafka/Zk cluster but for security purpose , Zookeeper access only limited > to kafka Cluster . > > > > > > *From: *SenthilKumar K > *Date: *Tuesday, May 5, 2020 at 12:06 PM > *To: *"Agrawal, Manoj (Cognizant)&quo

Re: KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

2020-05-05 Thread SenthilKumar K
ist of topic return by > KafkaConsumer.partitionsFor() on by using method type , if this is > PartitionInfo.leader() then include those partition in list . > > > > On 5/5/20, 11:44 AM, "SenthilKumar K" wrote: > > [External] > > > Hi Team,

KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

2020-05-05 Thread SenthilKumar K
Hi Team, We are using the KafkaConsumer.partitionsFor() API to find the list of available partitions. After fetching the list of partitions, we use the Consumer.offsetsForTimes() API to find the offsets for a given timestamp. The Consumer.partitionsFor() API simply returns all partitions, including the
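The AdminClient alternative being compared above can be sketched as below. Assumptions are flagged: the broker address and topic name are placeholders. Unlike partitionsFor(), describeTopics() exposes the leader node per partition, so leaderless partitions can be filtered out before calling offsetsForTimes().

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;

public class DescribeTopicsExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // describeTopics() returns per-partition leader/ISR details,
            // so partitions without a live leader can be skipped.
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("my-topic"))
                    .values().get("my-topic").get();
            desc.partitions().forEach(p ->
                System.out.println("partition " + p.partition() + " leader="
                        + (p.leader() == null ? "none" : p.leader().id())));
        }
    }
}
```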

Re: Process offsets from particular point

2020-04-11 Thread SenthilKumar K
Hi, We can re-consume the data from a particular point using the consumer.seek() and consumer.assign() APIs [1]. Please check the documentation. If you attached a timestamp when producing the records, you can use that timestamp to consume records from that point [2].
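The seek()/assign() replay described above can be sketched as follows; the broker address, topic, group id, and offset are placeholders, not values from the thread.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "replay-group");          // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("my-topic", 0);
            // assign() bypasses group rebalancing, which seek() requires
            consumer.assign(Collections.singletonList(tp));
            consumer.seek(tp, 12345L); // placeholder offset to replay from
            // subsequent poll() calls return records from that offset onwards
        }
    }
}
```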

Re: Replicas more than replication-factor

2020-02-12 Thread SenthilKumar K
We are also facing a similar issue in our Kafka cluster. Kafka Version: 2.2.0, RF: 5.
Partition | Latest Offset | Leader | Replicas | In Sync Replicas | Preferred Leader? | Under Replicated?
0 121 (121,50,51,52,53) (52,121,53,50,51) true false
1 122

Re: Moving partition(s) to different broker

2019-11-11 Thread SenthilKumar K
n? Producers and consumers should still > be able to access the topic in its current state. > > -- > Peter > > > On Nov 11, 2019, at 11:34 PM, SenthilKumar K > wrote: > > > > Hi Experts, We have seen a problem with partition leader i.e it's set > to -1. > &

Moving partition(s) to different broker

2019-11-11 Thread SenthilKumar K
Hi Experts, We have seen a problem with a partition leader, i.e. it's set to -1. describe o/p: Topic: 1453 Partition: 47 Leader: -1 Replicas: 24,15 Isr: 24. Kafka Version: 2.2.0, Replication: 2, Partitions: 48. Brokers 24 and 15 are both down due to disk errors and we lost partition 47. I tried

Re: Kafka Partition Leader -1

2019-11-10 Thread SenthilKumar K
gt; Hi, > > On 7 Nov 2019, at 09:18, SenthilKumar K wrote: > > Hello Experts , We are observing issues in Partition(s) when the Kafka > broker is down & the Partition Leader Broker ID set to -1. > > Kafka Version 2.2.0 > Total No Of Brokers: 24 > Total No Of Partiti

Re: How to list/stop all current partition assignment which is running

2019-11-07 Thread SenthilKumar K
I faced a similar issue when reassigning partitions to newly added brokers. Out of 400 partitions, 380 were successfully reassigned and the remaining 20 partitions were stuck for more than 3 hours. I logged into the ZK server and cleaned the path: rmr /kafka.primary/admin/reassign_partitions. Pls make a
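The ZooKeeper cleanup mentioned above looks roughly like the following. The ZK host and the /kafka.primary chroot are taken from the post but will differ per deployment; note that `rmr` is the legacy zkCli command, replaced by `deleteall` in newer ZooKeeper versions. Deleting this znode abandons the in-flight reassignment, so verify it is truly stuck first.

```shell
# Inspect the pending reassignment before touching anything:
./bin/zookeeper-shell.sh zk-host:2181 get /kafka.primary/admin/reassign_partitions

# Remove the stuck reassignment znode (use `rmr` on older zkCli,
# `deleteall` on newer ZooKeeper releases):
./bin/zookeeper-shell.sh zk-host:2181 deleteall /kafka.primary/admin/reassign_partitions
```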

Kafka Partition Leader -1

2019-11-07 Thread SenthilKumar K
Hello Experts, We are observing issues in partition(s) when a Kafka broker is down and the Partition Leader Broker ID is set to -1. Kafka Version: 2.2.0, Total No Of Brokers: 24, Total No Of Partitions: 48, Replication Factor: 2, Min In Sync Replicas: 1. Partition

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

2019-09-04 Thread SenthilKumar K
stackoverflow.com/a/43675621 > > > > > > On Wed, Sep 4, 2019 at 2:59 PM SenthilKumar K > > wrote: > > > > > Hello Experts , We have deployed 10 node kafka cluster in production. > > > Recently two of the nodes went down due to network problem and we

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

2019-09-04 Thread SenthilKumar K
Thanks Karolis. On Wed, 4 Sep, 2019, 5:57 PM Karolis Pocius, wrote: > I had the same issue which was solved by increasing max_map_count > https://stackoverflow.com/a/43675621 > > > On Wed, Sep 4, 2019 at 2:59 PM SenthilKumar K > wrote: > > > Hello Experts , We
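The max_map_count fix referenced above can be applied as follows. The value 262144 is a commonly used setting, not one stated in the thread; size it to your partition/segment count, since Kafka mmaps every log segment and index file.

```shell
# Check the current limit (the default is often 65530):
sysctl vm.max_map_count

# Raise it for the running system:
sudo sysctl -w vm.max_map_count=262144

# Persist the change across reboots:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```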

Re: Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

2019-09-04 Thread SenthilKumar K
gt; On Wed, Sep 4, 2019 at 1:27 PM Karolis Pocius > wrote: > > > I had the same issue which was solved by increasing max_map_count > > https://stackoverflow.com/a/43675621 > > > > > > On Wed, Sep 4, 2019 at 2:59 PM SenthilKumar K > > wrote: > > >

Kafka BootStrap : Error while deleting the clean shutdown file in dir /tmp/data (kafka.server.LogDirFailureChannel) : Caused by: OOM: Map failed

2019-09-04 Thread SenthilKumar K
Hello Experts, We have deployed a 10-node Kafka cluster in production. Recently two of the nodes went down due to a network problem and we brought them up after 24 hours. While bootstrapping the Kafka service on the failed nodes, we saw the below error and the broker failed to come up.

Re: Does anyone fixed Producer TimeoutException problem ?

2019-07-03 Thread SenthilKumar K
d > this much for streaming applications ? > > Regards, > Shyam > > On Wed, Jul 3, 2019 at 1:30 PM SenthilKumar K > wrote: > >> `*Partition = -1` - *This explains why are you getting timeout error. >> >> Why dont you use Default Partitioner ?: >> https:/

Re: Does anyone fixed Producer TimeoutException problem ?

2019-07-03 Thread SenthilKumar K
Partition = " + p ); > > On key i am doing hashCode(). What need to be corrected here to avoid this > negative number partition number ? i.e. Partition = -1 > > What should be my partition key logic like ? > > any help highly appreciated. > Regards, > Shyam > > On

Re: Does anyone fixed Producer TimeoutException problem ?

2019-07-02 Thread SenthilKumar K
should run fine in my local right. > I tried several producer configurations combinations as explained in the > SOF link. > > So not sure now what is the issue and how to fix it ? > > Is in your case the issue fixed ? > > Regards, > Shyam > > On Tue, Jul 2, 2019 at 5

Re: Does anyone fixed Producer TimeoutException problem ?

2019-07-02 Thread SenthilKumar K
Hi Shyam, We also faced the `TimeoutException: Expiring 1 record(s)` issue in our Kafka Producer client. As described here, first we tried increasing the request timeout, but that

Kafka Offsets

2019-07-01 Thread SenthilKumar K
Hello Experts, We are trying to understand *"How does Kafka assign an offset value to a message?"* Kafka Version: 2.2.0, Kafka Client: 1.1.0. For a topic, using the Java Consumer API we consumed data from Partition 0 (48 partitions total) and below are the offset numbers. 181933 181935 181936 181939

Re: First time building a streaming app and I need help understanding how to build out my use case

2019-06-10 Thread SenthilKumar K
*"When I get a request for all of the messages containing a given user ID, I need to query in to the topic and get the content of those messages. Does that make sense and is it a thing Kafka can do?"* - If I understand correctly, your requirement is to query Kafka topics based on key.

Re: Customers are getting same emails for roughly 30-40 times

2019-05-24 Thread SenthilKumar K
Hi, You can check the Consumer API: https://kafka.apache.org/10/javadoc/?org/apache/kafka/clients/consumer/KafkaConsumer.html . Refer to: Manual Offset Control. --Senthil On Sat, May 25, 2019, 9:53 AM ASHOK MACHERLA wrote: > Dear Hans > > Thanks for your reply > > As you said we are getting same
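The "Manual Offset Control" suggestion above boils down to disabling auto-commit and committing only after the side effect (here, sending the email) succeeds, so a crash replays at most the uncommitted batch instead of resending everything. This is a generic sketch: the broker address, group id, topic, and sendEmail() helper are hypothetical, and poll(Duration) requires client 2.0+.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("group.id", "mailer");                // placeholder
        props.put("enable.auto.commit", "false");       // we commit ourselves
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("email-events")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> r : records) {
                    sendEmail(r.value()); // hypothetical side effect
                }
                consumer.commitSync();    // commit only after processing succeeded
            }
        }
    }

    static void sendEmail(String payload) { /* hypothetical helper */ }
}
```

Note this gives at-least-once delivery; duplicate emails can still occur on a crash between sendEmail() and commitSync(), so the send itself should be idempotent (e.g., deduplicated by message id).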

Topic Creation two different config(s)

2019-05-08 Thread SenthilKumar K
Hello Experts, We have a requirement to create topics dynamically with two different config(s). Is this possible in Kafka? Kafka Version: 2.2.0. Topics with different settings: #1 - Set retention to 24 hours for free-tier customers. #2 - Set retention to 72 hours for paid customers. Note:
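Per-topic retention overrides make the two-tier setup above straightforward; retention.ms on a topic overrides the broker-wide log.retention.* defaults. Topic names, partition counts, and replication factor below are placeholders (--bootstrap-server on kafka-topics.sh requires 2.2+, which matches the version in the post).

```shell
# Free tier: 24 hours (24 * 3600 * 1000 ms)
./bin/kafka-topics.sh --bootstrap-server broker1:9092 --create \
    --topic tenant-free --partitions 12 --replication-factor 2 \
    --config retention.ms=86400000

# Paid tier: 72 hours (72 * 3600 * 1000 ms)
./bin/kafka-topics.sh --bootstrap-server broker1:9092 --create \
    --topic tenant-paid --partitions 12 --replication-factor 2 \
    --config retention.ms=259200000
```

The same --config overrides are available programmatically via AdminClient.createTopics() if topics must be created dynamically from application code.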

Re: Mirror Maker tool is not running

2019-05-07 Thread SenthilKumar K
Looks like you are hitting : https://issues.apache.org/jira/browse/KAFKA-6947 or https://jira.apache.org/jira/browse/KAFKA-6177 --Senthil On Mon, May 6, 2019 at 6:32 PM ASHOK MACHERLA wrote: > Dear Team Members > > > > Please find these below configurations for mirror maker tool scripts > > >

Re: Required guidelines for kafka upgrade

2019-05-04 Thread SenthilKumar K
Can you verify your producer and consumer commands ? Console Producer : ./bin/kafka-console-producer.sh --broker-list :9093 --producer.config /kafka/client-ssl.properties --topic kafka_220 Console Consumer: ./bin/kafka-console-consumer.sh --bootstrap-server :9093 --consumer.config

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
led > authentication with /192.168.175.128<http://192.168.175.128/> (SSL > handshake failed) (org.apache.kafka.common.network.Selector) > > getting logs all brokers, > > I tried to produce sample messages to topic, > consumer is not print that messages . > > ple

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
nd restart your broker. --Senthil On Fri, May 3, 2019 at 10:20 PM SenthilKumar K wrote: > Here is my server.properties. > > > reserved.broker.max.id = 2147483647 > log.retention.bytes = 68719476736 > listeners = SSL://xx:9093 > socket.receive.buffer.b

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
Here is my server.properties. reserved.broker.max.id = 2147483647 log.retention.bytes = 68719476736 listeners = SSL://xx:9093 socket.receive.buffer.bytes = 102400 broker.id = xxx ssl.truststore.password = x auto.create.topics.enable = true ssl.enabled.protocols = TLSv1.2

Re: Required guidelines for kafka upgrade

2019-05-03 Thread SenthilKumar K
Hi, if you see an SSL issue, try setting ssl.endpoint.identification.algorithm= (simply leave it empty, no double quotes). It would be good if you shared the error message from the broker logs. --Senthil On Fri, May 3, 2019, 9:36 PM Harper Henn wrote: > What specific errors are you seeing in the server logs
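In context, the empty setting above disables hostname verification in a client-ssl.properties file along the following lines; paths and the password are placeholders, and disabling verification is a debugging aid, not a recommended permanent setting.

```properties
# client-ssl.properties (sketch; paths/passwords are placeholders)
security.protocol=SSL
ssl.truststore.location=/kafka/client.truststore.jks
ssl.truststore.password=changeit
# An empty value disables hostname verification: nothing after '=', no quotes
ssl.endpoint.identification.algorithm=
```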

Re: Required guidelines for kafka upgrade

2019-05-02 Thread SenthilKumar K
Hi Ashok, I'd suggest doing this exercise in your SQA environment before making any change to prod. Thanks! --Senthil On Thu, May 2, 2019 at 11:35 AM SenthilKumar K wrote: > Hi , > #1 - Download stable version 2.2.0 [kafka_2.11-2.2.0.tgz > <https://www.apache.org/dyn/clo

Re: Required guidelines for kafka upgrade

2019-05-02 Thread SenthilKumar K
Hi, #1 - Download stable version 2.2.0 [kafka_2.11-2.2.0.tgz]. #2 - Update server.properties with the below values. inter.broker.protocol.version=0.10.1 log.message.format.version=0.10.1 #3 - Make
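The pinned versions in step #2 are the first half of the standard two-phase rolling upgrade from the official docs; a sketch of both phases, assuming an upgrade from 0.10.1 to 2.2.0 as in the thread:

```properties
# Phase 1: run the new 2.2.0 binaries but keep the old protocol/format,
# so not-yet-upgraded brokers and old clients still interoperate.
inter.broker.protocol.version=0.10.1
log.message.format.version=0.10.1

# Phase 2: once every broker runs 2.2.0, bump both values and perform
# a second rolling restart.
inter.broker.protocol.version=2.2
log.message.format.version=2.2
```

Bumping inter.broker.protocol.version before all brokers are upgraded breaks inter-broker communication, which is why the two restarts cannot be collapsed into one.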

Re: Required guidelines for kafka upgrade

2019-04-26 Thread SenthilKumar K
ry?? > > > > Could please explain like that. > > > > Sent from Outlook<http://aka.ms/weboutlook> > > > > From: SenthilKumar K > > Sent: 26 April 2019 16:03 > > To: users@kafka.apache.org > > Subject: Re: Required guid

Re: Required guidelines for kafka upgrade

2019-04-26 Thread SenthilKumar K
Hi, You can refer to the official documentation to upgrade a Kafka cluster. Section: 1.5 Upgrading From Previous Versions. Last week we upgraded brokers from 1.1.0 to 2.2.0. I think the current stable version is 2.2.0. --Senthil On Fri, Apr 26, 2019, 3:54 PM ASHOK MACHERLA wrote: > Dear Team > >

Re: kafka.common.StateChangeFailedException: Failed to elect leader for partition XXX under strategy PreferredReplicaPartitionLeaderElectionStrategy

2018-11-15 Thread SenthilKumar K
Adding Kafka Controller Log. [2018-11-15 11:19:23,985] ERROR [Controller id=4 epoch=8] Controller 4 epoch 8 failed to change state for partition XYXY-24 from OnlinePartition to OnlinePartition (state.change.logger) On Thu, Nov 15, 2018 at 5:12 PM SenthilKumar K wrote: > Hello Kafka Expe

kafka.common.StateChangeFailedException: Failed to elect leader for partition XXX under strategy PreferredReplicaPartitionLeaderElectionStrategy

2018-11-15 Thread SenthilKumar K
Hello Kafka Experts, We are facing a StateChangeFailedException on one of the brokers. Out of 4 brokers, 3 are running fine and only one broker is throwing the state change error. I don't find any error related to this in the Zookeeper logs. Kafka Version: kafka_2.11-1.1.0. Any input

Optimal Message Size for Kafka

2018-09-07 Thread SenthilKumar K
Hello Experts, We are planning to use Kafka for large messages (size varies from 2 MB to 4 MB per event). By setting message.max.bytes to a 64 MB value, the Kafka producer allows large messages. But how does it impact performance (both producer and consumer)? Would like to understand the
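Raising message.max.bytes alone is not enough; the limit must be raised consistently on broker, producer, and consumer or fetches/sends fail at whichever layer still has the default. A sketch using the 64 MB figure from the post (67108864 bytes; buffer.memory is an illustrative value, not from the thread):

```properties
# Broker (server.properties)
message.max.bytes=67108864
replica.fetch.max.bytes=67108864

# Producer
max.request.size=67108864
buffer.memory=134217728

# Consumer
max.partition.fetch.bytes=67108864
fetch.max.bytes=67108864
```

Large messages inflate broker heap/page-cache pressure and batching latency, so many deployments instead store the payload externally and publish a reference through Kafka.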

Re: Kafka Producer Partition Key Selection

2018-08-29 Thread SenthilKumar K
hil, > > In our case we use NULL as message Key to achieve even distribution in > producer. > With that we were able to achieve very even distribution with that. > Our Kafka client version is 0.10.1.0 and Kafka broker version is 1.1 > > > Thanks, > Gaurav > > On Wed, Aug

Kafka Producer Partition Key Selection

2018-08-29 Thread SenthilKumar K
Hello Experts, We want to distribute data across partitions in a Kafka cluster. Option 1: Use a null partition key, which distributes data across partitions. Option 2: Choose a key (a random UUID?) which can help to distribute data 70-80%. I have seen the below side effect on the Confluence page about
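Option 1 above is just a ProducerRecord with a null key; a minimal sketch (broker address and topic are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NullKeyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // With a null key the default partitioner spreads records across
            // partitions (round-robin on older clients, sticky batching on
            // newer ones) instead of hashing a key to a fixed partition.
            producer.send(new ProducerRecord<>("events", null, "payload"));
        }
    }
}
```

The trade-off: a null key gives even distribution but no per-key ordering, while a hashed key (Option 2) gives ordering per key at the cost of potential skew.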

Re: Problem consuming from broker 1.1.0

2018-07-19 Thread SenthilKumar K
Hi Craig Ching, Reg. *"We did end up turning on debug logs for the console consumer and found that one broker seemed to be having problems; it would lead to timeouts communicating with it. After restarting that broker, things sorted themselves out."* We had a similar problem on a prod cluster and I'm

Kafka Broker Not Responding

2018-07-19 Thread SenthilKumar K
Hello Kafka Experts, We are currently facing an issue on our 3-node Kafka cluster: one of the brokers is not responding to any queries. I've checked the logs but found nothing related to this problem. Kafka Version: 1.1.0. Server.conf: ## Timeout properties to check prod outage

Kafka Consumer - WARN INVALID_FETCH_SESSION_EPOCH

2018-05-16 Thread SenthilKumar K
Hello All, Recently we upgraded our brokers and clients to 1.1.0 and the consumer is frequently getting the below error message. Node 1 was unable to process the fetch request with (sessionId=1388566672, epoch=5): INVALID_FETCH_SESSION_EPOCH. Broker is disconnecting clients. DEBUG Connection

Re: kafka in unrecoverable state

2018-04-07 Thread SenthilKumar K
I observed the below error on one of the brokers, and it is unresponsive: [2018-04-07 12:51:39,830] ERROR [Replica Manager on Broker 3]: Error processing append operation on partition __consumer_offsets-27 (kafka.server.ReplicaManager) org.apache.kafka.common.errors.NotEnoughReplicasException:

Java Kafka Consumer Group Settings

2018-03-24 Thread SenthilKumar K
Hi Experts, I have a 3-node Kafka cluster with inbound traffic of ~60 Mbps. In the same cluster we are trying to consume the data using the Java Kafka consumer (https://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html) and I noticed a few problems in the Java

Kafka Log deletion Problem

2018-02-02 Thread SenthilKumar K
Hello Experts, We have a Kafka setup running for our analytics pipeline. Below is the broker config: max.message.bytes = 67108864 replica.fetch.max.bytes = 67108864 zookeeper.session.timeout.ms = 7000 replica.socket.timeout.ms = 3 offsets.commit.timeout.ms = 5000 request.timeout.ms =

Kafka Consumer - org.apache.kafka.common.errors.TimeoutException: Failed to get offsets by times in 305000 ms

2017-10-11 Thread SenthilKumar K
Hi All, Recently we started seeing a Kafka consumer error with a timeout. What could be the cause here? Version: kafka_2.11-0.11.0.0. Consumer Properties: bootstrap.servers, enable.auto.commit, auto.commit.interval.ms, session.timeout.ms

Re: Different Data Types under same topic

2017-08-18 Thread SenthilKumar K
+ dev experts for inputs. --Senthil On Fri, Aug 18, 2017 at 9:15 PM, SenthilKumar K <senthilec...@gmail.com> wrote: > Hi Users , We have planned to use Kafka for one of the use to collect data > from different server and persist into Message Bus .. > > Flow Would Be :

Different Data Types under same topic

2017-08-18 Thread SenthilKumar K
Hi Users, We have planned to use Kafka for one of our use cases: to collect data from different servers and persist it into a message bus. The flow would be: Source --> Kafka --> Streaming Engine --> Reports. We would like to store different types of data in the same topic, and at the same time the data should be accessed

Re: Kafka write MB/s

2017-07-03 Thread SenthilKumar K
I tried benchmarking the Kafka producer with acks=1 on a 5-node cluster. The total transfer rate is ~950 MB/sec; a single broker's transfer rate is less than 200 MB/sec. Load generator: I started 6 instances of an HTTP server which write to the brokers. Using the wrk2 HTTP benchmarking tool I was able to send

Re: Handling 2 to 3 Million Events before Kafka

2017-06-22 Thread SenthilKumar K
Hi Barton - I think we can use Async Producer with Call Back api(s) to keep track on which event failed .. --Senthil On Thu, Jun 22, 2017 at 4:58 PM, SenthilKumar K <senthilec...@gmail.com> wrote: > Thanks Barton.. I'll look into these .. > > On Thu, Jun 22, 2017 at 7:12 AM,

Re: Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
r some examples: https://github.com/smallnest/C1000K-Servers . > > It seems possible with the right sort of kafka producer tuning. > > -Dave > > *From:* SenthilKumar K [mailto:senthilec...@gmail.com] > *Sent:* Wednesday, June 21, 2017 8:55 AM

Re: Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
kers > - consumers > > Is the problem that web servers cannot send to Kafka fast enough or your > consumers cannot process messages off of kafka fast enough? > What is the average size of these messages? > > -Dave > > -----Original Message- > From: SenthilKumar K [

Handling 2 to 3 Million Events before Kafka

2017-06-21 Thread SenthilKumar K
Hi Team, Sorry if this question is irrelevant to the Kafka group. I have been trying to solve the problem of handling 5 GB/sec ingestion. Kafka is a really good candidate for us to handle this ingestion rate. 100K machines --> { Http Server (Jetty/Netty) } --> Kafka Cluster. I see the problem

Re: Efficient way of Searching Messages By Timestamp - Kafka

2017-05-27 Thread SenthilKumar K
Hi Team, any help here, please? Cheers, Senthil On Sat, May 27, 2017 at 8:25 PM, SenthilKumar K <senthilec...@gmail.com> wrote: > Hello Kafka Developers , Users , > > We are exploring the SearchMessageByTimestamp feature in Kafka for our > use case . > > Use Case

Efficient way of Searching Messages By Timestamp - Kafka

2017-05-27 Thread SenthilKumar K
Hello Kafka Developers, Users, We are exploring the SearchMessageByTimestamp feature in Kafka for our use case. Use Case: Kafka will be a realtime message bus; users should be able to pull logs by specifying start_date and end_date, or pull the last five minutes of data, etc. I did a POC
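The start_date/end_date pull described above maps naturally onto Consumer.offsetsForTimes(): resolve the start timestamp to an offset per partition, seek there, and poll until record timestamps pass the end of the range. A hedged sketch (broker address and topic are placeholders; the "last five minutes" window stands in for start_date):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndTimestamp;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class TimeRangeReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        long startMs = System.currentTimeMillis() - Duration.ofMinutes(5).toMillis();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Build a timestamp query covering every partition of the topic.
            Map<TopicPartition, Long> query = new HashMap<>();
            for (PartitionInfo p : consumer.partitionsFor("logs")) {
                query.put(new TopicPartition("logs", p.partition()), startMs);
            }
            consumer.assign(query.keySet());

            // Each entry maps to the earliest offset whose timestamp >= startMs,
            // or null if the partition has no such message.
            Map<TopicPartition, OffsetAndTimestamp> offsets = consumer.offsetsForTimes(query);
            offsets.forEach((tp, ot) -> {
                if (ot != null) consumer.seek(tp, ot.offset());
            });
            // poll() from here, stopping once record.timestamp() exceeds end_date
        }
    }
}
```

This relies on the broker's time index, so it only works for messages produced with timestamps (CreateTime or LogAppendTime).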

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
offsetsForTimes() > > https://kafka.apache.org/0102/javadoc/org/apache/kafka/clients/consumer/Consumer.html#offsetsForTimes(java.util.Map) > > -hans > > > On May 25, 2017, at 6:39 AM, SenthilKumar K <senthilec...@gmail.com> > wrote: > > > > I did an experim

Re: Kafka Read Data from All Partition Using Key or Timestamp

2017-05-25 Thread SenthilKumar K
advise me here! Cheers, Senthil On Thu, May 25, 2017 at 3:36 PM, SenthilKumar K <senthilec...@gmail.com> wrote: > Thanks a lot Mayuresh. I will look into SearchMessageByTimestamp feature > in Kafka .. > > Cheers, > Senthil > > On Thu, May 25, 2017 at 1:12 P

Kafka Read Data from All Partition Using Key or Timestamp

2017-05-24 Thread SenthilKumar K
Hi All, We have been using Kafka for our use case, which helps in delivering real-time raw logs. I have a requirement to fetch data from Kafka by using offsets. DataSet Example: {"access_date":"2017-05-24 13:57:45.044","format":"json","start":"1490296463.031"} {"access_date":"2017-05-24