Re: Is it possible to run MirrorMaker in active/active/active?

2022-01-31 Thread Manoj.Agrawal2
Just to make sure I understand: are you talking about the scenario below?

Mirroring data between A <-> B and B <-> C, correct?


From: Doug Whitfield 
Sent: Monday, January 31, 2022 2:18 PM
To: users@kafka.apache.org 
Subject: Re: Is it possible to run MirrorMaker in active/active/active?


Hi Ryanne,

I think you are probably correct, but just for clarity, you are talking about a 
data mesh, not a service mesh, correct?

Best Regards,
--

Doug Whitfield | Enterprise Architect, OpenLogic
Perforce Software


From: Ryanne Dolan 
Date: Monday, January 31, 2022 at 1:12 PM
To: Kafka Users 
Subject: Re: Is it possible to run MirrorMaker in active/active/active?
Doug, you can have any number of clusters with a fully-connected mesh
topology, which I think is what you are looking for.

Ryanne
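
For reference, a minimal mm2.properties sketch of such a fully-connected mesh for three clusters (the aliases A/B/C and the bootstrap addresses are placeholders, not taken from this thread):

clusters = A, B, C
A.bootstrap.servers = a-broker:9092
B.bootstrap.servers = b-broker:9092
C.bootstrap.servers = c-broker:9092

# enable every directed flow so each cluster replicates to the other two
A->B.enabled = true
A->C.enabled = true
B->A.enabled = true
B->C.enabled = true
C->A.enabled = true
C->B.enabled = true

A->B.topics = .*
A->C.topics = .*
B->A.topics = .*
B->C.topics = .*
C->A.topics = .*
C->B.topics = .*

With the default replication policy, remote topics show up prefixed with the source alias (for example B.topic1 on clusters A and C), which is what prevents replication loops in the mesh.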

On Mon, Jan 31, 2022, 12:44 PM Doug Whitfield wrote:

> Hi folks,
>
> Every example I have seen uses two clusters in active/active and testing
> suggests I can only get two clusters to run active/active.
>
> I think we will need to use a fan-in pattern if we want more than two
> clusters. Is that correct?
>
> Best Regards,
> --
>
> Doug Whitfield | Enterprise Architect, OpenLogic

Re: Upgrade from 2.0 to 2.8.1 failed

2022-01-25 Thread Manoj.Agrawal2
Are you getting the error in the Kafka log or in server.log?

From: Nicolas Carlot 
Sent: Tuesday, January 25, 2022 6:20 AM
To: users@kafka.apache.org 
Subject: Upgrade from 2.0 to 2.8.1 failed


Hello everyone,

I just had a major failure while upgrading a kafka cluster from 2.0 to
2.8.1 following the provided migration process.
I understand that a topicId is now assigned to each topic within ZooKeeper and
recorded in the partition metadata of each partition.
When describing the topic, I get a different topicId depending on
the ZK node I'm querying:

[kafkaadm@lyn3e154(PFI):~ 13:16:28]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi1:62181 --describe --topic PARCEL360.LT
Topic: PARCEL360.LT  TopicId: XjIuCqy2TcKu-M5smrz9iA  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
    Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3
[kafkaadm@lyn3e154(PFI):~ 13:17:06]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi2:62181 --describe --topic PARCEL360.LT
Topic: PARCEL360.LT  TopicId: zwbQDd9NRjGwq-v2twHfIQ  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
    Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3
[kafkaadm@lyn3e154(PFI):~ 13:17:11]$ /opt/java/j2ee/kafka/bin/kafka-topics.sh --zookeeper satezookeeperi3:62181 --describe --topic PARCEL360.LT
Topic: PARCEL360.LT  TopicId: XjIuCqy2TcKu-M5smrz9iA  PartitionCount: 10  ReplicationFactor: 3  Configs: compression.type=lz4
    Topic: PARCEL360.LT  Partition: 0  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 1  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 2  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 3  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 4  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 5  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 6  Leader: 3  Replicas: 1,2,3  Isr: 3
    Topic: PARCEL360.LT  Partition: 7  Leader: 3  Replicas: 2,3,1  Isr: 3
    Topic: PARCEL360.LT  Partition: 8  Leader: 3  Replicas: 3,1,2  Isr: 3
    Topic: PARCEL360.LT  Partition: 9  Leader: 3  Replicas: 1,2,3  Isr: 3


Any idea what's happening here?
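
One way to pin down where the divergence lives (a sketch, reusing the hostnames and install path from above; the broker data-log path is an assumption) is to read the topic znode directly from each ZooKeeper server and compare the topic_id field, and to check the partition.metadata file that 2.8 writes in each partition directory on the brokers:

/opt/java/j2ee/kafka/bin/zookeeper-shell.sh satezookeeperi1:62181 get /brokers/topics/PARCEL360.LT
/opt/java/j2ee/kafka/bin/zookeeper-shell.sh satezookeeperi2:62181 get /brokers/topics/PARCEL360.LT
/opt/java/j2ee/kafka/bin/zookeeper-shell.sh satezookeeperi3:62181 get /brokers/topics/PARCEL360.LT

# on each broker (log dir path is a placeholder):
cat /var/kafka-logs/PARCEL360.LT-0/partition.metadata

If the ensemble members really return different answers for the same znode, the ZooKeeper quorum itself is inconsistent, which would be worth investigating before touching Kafka.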




--

Nicolas Carlot
Lead dev, Direction des Systèmes d'Information
3 boulevard Romain Rolland
75014 Paris

kafka schema registry - some queries and questions

2020-10-08 Thread Manoj.Agrawal2
 Hi All,

I wanted to understand a bit more about Schema Registry (see the sketch below):
1. Can we use Schema Registry with Apache Kafka?
2. Can we use Schema Registry with Amazon MSK?
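
For what it's worth, Confluent Schema Registry runs as a separate service that talks to a Kafka-compatible cluster over the regular client protocol, so both plain Apache Kafka and Amazon MSK can serve as its backing store. A hedged sketch of the producer-side wiring (the URL, broker address, and serializer choice are illustrative and assume the Confluent Avro serializer is on the classpath):

bootstrap.servers=broker1:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
schema.registry.url=http://schema-registry-host:8081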

Thanks
Manoj A



Re: Kafka Size of ISR Set(3) insufficient for min.isr 2

2020-09-27 Thread Manoj.Agrawal2
In order to produce data, the topic needs at least two in-sync replicas (min.insync.replicas=2), but it looks like the ISR is out of sync: note that Set(3) in the error means the ISR contains only broker 3, not that it has three members. The Kafka cluster health is not good.

Topic: FooBar Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3

[root@LoremIpsum kafka]# /usr/lib/kafka/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FooBar
Topic: FooBar PartitionCount: 1 ReplicationFactor: 3 Configs: min.insync.replicas=2,cleanup.policy=compact,segment.bytes=1073741824,max.message.bytes=5242880,min.compaction.lag.ms=60480,message.timestamp.type=LogAppendTime,unclean.leader.election.enable=false
Topic: FooBar Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3
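
A quick way to list every partition whose ISR has shrunk below the full replica set (a sketch, reusing the broker address from the example above):

/usr/lib/kafka/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions

If FooBar-0 shows up there, brokers 1 and 2 have fallen out of the ISR and need to catch up (or be restarted) before min.insync.replicas=2 can be satisfied.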

On 9/26/20, 9:55 PM, "Franz van Betteraey"  wrote:


Hi all,



I have a strange Kafka server error when mirroring data with
MirrorMaker 1 in Apache Kafka 2.6:

org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the current ISR Set(3) is insufficient to satisfy the min.isr requirement of 2 for partition FooBar-0

The strange thing is that the min.isr setting is 2 and the ISR set
has 3 nodes. Nevertheless I get the NotEnoughReplicasException.

Taking a deeper look at the topic does not show anything unusual either:

[root@LoremIpsum kafka]# /usr/lib/kafka/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FooBar
Topic: FooBar PartitionCount: 1 ReplicationFactor: 3 Configs: min.insync.replicas=2,cleanup.policy=compact,segment.bytes=1073741824,max.message.bytes=5242880,min.compaction.lag.ms=60480,message.timestamp.type=LogAppendTime,unclean.leader.election.enable=false
Topic: FooBar Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3

The logs of the 3 nodes look normal (as far as I can judge). Is there
any other reason that could produce this message? What else could be
checked?

Thank you very much for any advice!

I also posted this question on SO here:

https://stackoverflow.com/questions/64080819/kafka-size-of-isr-set3-insufficient-for-min-isr-2

Kind regards,

   Franz





Re: Should kafka be deleting unused replicas?

2020-09-15 Thread Manoj.Agrawal2
Looks like this is a bug.
You can clean the stale data log for this topic on node 3 and then start the Kafka
process on node 3. That should resolve the issue.
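
A sketch of that cleanup (the log dir and partition name are placeholders; only do this while the broker on node 3 is stopped):

# on node 3, with the broker stopped:
mv /var/kafka-logs/X-0 /tmp/stale-replica-backup-X-0
# then start the broker; it no longer hosts the stale replica

Moving the directory aside instead of deleting it outright leaves a way back if anything looks wrong after the restart.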

On 9/15/20, 8:09 PM, "Dima Brodsky"  wrote:


We are using version 2.3.1.

Two more pieces of information wrt Luke's answer: assume the retention
of the data is Y and you turn on node 3 after time Y, so the data on
node 3 is old and would be deleted regardless; there is then no point in
doing another partition reassignment because the data is completely stale.

I would think the data should be deleted, but we are seeing that it is not.



On Tue, Sep 15, 2020 at 8:04 PM  wrote:

> It should delete the old data log based on retention of topic.
> What kafka version you are using ?
>
> On 9/15/20, 7:48 PM, "Dima Brodsky" 
> wrote:
>
>
>
> Hi,
>
> I have a question, when you start kafka on a node, if there is a 
random
> replica log should it delete it on startup?  Here is an example:
> Assume
> you have a 4 node cluster.  Topic X has 3 replicas and it is
> replicated on
> nodes 1, 2, and 3.  Now you shutdown node 3 and you place  the replica
> that
> was on node 3 on node 4.  Then once everything is in-sync you start up
> node
> 3 again.  What should happen to the replica X on node 3.  Should kafka
> delete it or will it stick around forever.
>
> Given the above scenario we are seeing the replica stick around
> forever.
> Is this working as designed, or is this a bug?
>
> Thanks!
> ttyl
> Dima
>
> --
> dbrod...@salesforce.com
>
> "The price of reliability is the pursuit of the utmost simplicity.
> It is the price which the very rich find most hard to pay." (Sir 
Antony
> Hoare, 1980)
>
>


--
dbrod...@salesforce.com

"The price of reliability is the pursuit of the utmost simplicity.
It is the price which the very rich find most hard to pay." (Sir Antony
Hoare, 1980)




Re: Should kafka be deleting unused replicas?

2020-09-15 Thread Manoj.Agrawal2
It should delete the old data log based on the topic's retention.
What Kafka version are you using?

On 9/15/20, 7:48 PM, "Dima Brodsky"  wrote:


Hi,

I have a question: when you start Kafka on a node, if there is a stray
replica log, should it be deleted on startup? Here is an example. Assume
you have a 4-node cluster. Topic X has 3 replicas and is replicated on
nodes 1, 2, and 3. Now you shut down node 3 and place the replica that
was on node 3 on node 4. Then, once everything is in sync, you start up node
3 again. What should happen to replica X on node 3? Should Kafka
delete it, or will it stick around forever?

Given the above scenario, we are seeing the replica stick around forever.
Is this working as designed, or is this a bug?

Thanks!
ttyl
Dima

--
dbrod...@salesforce.com

"The price of reliability is the pursuit of the utmost simplicity.
It is the price which the very rich find most hard to pay." (Sir Antony
Hoare, 1980)




Re: MirrorMaker 2.0 - Translating offsets for remote topics and consumer groups

2020-09-15 Thread Manoj.Agrawal2
Hi Ryanne/Josh,

I'm working on an active-active MirrorMaker setup and am translating consumer
offsets from source cluster A to destination cluster B. Any pointers would be helpful.

Cluster A
Cluster name: A
Topic name: testA
Consumer group name: mm-testA-consumer

Cluster B
Cluster name: B
Topic name: source.testA
Consumer group name: mm-testA-consumer

Using the API below, I would like to translate consumer offsets from cluster A to
cluster B for the consumer group mm-testA-consumer.


Map<String, Object> prop = new HashMap<>();
String bootsStrapServer = "clusterB:9092";
String topic = "source.testA";
String groupId = "mm-testA-consumer";
prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootsStrapServer);
prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        StringDeserializer.class.getName());
prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        StringDeserializer.class.getName());
prop.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);
// consumer.subscribe(Collections.singleton(topic));
consumer.subscribe(Arrays.asList(topic));
try {
    // translate cluster A's committed offsets for this group into cluster B offsets
    Map<TopicPartition, OffsetAndMetadata> newOffsets =
            RemoteClusterUtils.translateOffsets(prop, "A", groupId, Duration.ofMillis(55500));

    System.out.println(newOffsets.toString());
    newOffsets.forEach((topicPartition, offsetAndMetadata) ->
            consumer.seek(topicPartition, offsetAndMetadata));
} catch (Exception e) {
    System.out.println(e.getMessage());
}
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        logger.info("Key:" + record.key() + " Value:" + record.value()
                + " Offset:" + record.offset() + " Partition:" + record.partition());
    }
}

I'm getting the below error, repeated for every metadata request (correlation ids 3 through 9):

[main] WARN org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-null-2, groupId=null] Error while fetching metadata with correlation id 3 : {A.checkpoints.internal=UNKNOWN_TOPIC_OR_PARTITION}


Any suggestions would be helpful.
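
One thing worth checking (a sketch, reusing the alias and address above): with A->B replication enabled, MM2 emits checkpoints for cluster A's consumer groups to a topic named A.checkpoints.internal on cluster B, and that is exactly the topic the UNKNOWN_TOPIC_OR_PARTITION warnings complain about. Listing the internal topics on the cluster the client actually connects to can confirm whether it exists yet:

bin/kafka-topics.sh --bootstrap-server clusterB:9092 --list | grep checkpoints

If it is missing, either checkpoint emission has not run yet (it is periodic) or the translation call is pointed at the wrong cluster's bootstrap servers.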

On 8/21/20, 7:52 AM, "Ryanne Dolan"  wrote:


Josh, make sure there is a consumer in cluster B subscribed to A.topic1.
Wait a few seconds for a checkpoint to appear upstream on cluster A, and
then translateOffsets() will give you the correct offsets.

By default MM2 will block consumers that look like kafka-console-consumer,
so make sure you specify a custom group ID when testing this.

Ryanne

On Thu, Aug 20, 2020, 11:21 AM Josh C  wrote:

> Thanks again Ryanne, I didn't realize that MM2 would handle that.
>
> However, I'm unable to mirror the remote topic back to the source cluster
> by adding it to the topic whitelist. I've also tried to update the topic
> blacklist and remove ".*\.replica" (since the blacklists take precedence
> over the whitelists), but that doesn't seem to be doing much either? Is
> there something else I should be aware of in the mm2.properties file?
>
> Appreciate all your help!
>
> Josh
>
> On Wed, Aug 19, 2020 at 12:55 PM Ryanne Dolan 
> wrote:
>
> > Josh, if you have two clusters with bidirectional replication, you only
> get
> > two copies of each record. MM2 won't replicate the data "upstream", cuz
> it
> > knows it's already there. In particular, MM2 knows not to create topics
> > like B.A.topic1 on cluster A, as this would be an 

Re: Mirror Maker 2.0 NOT generating checkpoints for consumers running in assign mode

2020-09-11 Thread Manoj.Agrawal2
Hi Ananya,
Were you able to resolve this issue? I'm facing the same issue.

What parameters should be passed here if I'm doing a failover from cluster A -> B?

Map<TopicPartition, OffsetAndMetadata> newOffsets =
        RemoteClusterUtils.translateOffsets(properties, "A", "TestTopic-123", Duration.ofMillis(5500));

properties = bootstrap properties of cluster B
"TestTopic-123" = the consumer group id (which in this test is the same string as the topic name at cluster A)

Thanks


On 9/7/20, 8:43 AM, "Ananya Sen"  wrote:


Hello all,

I was using MirrorMaker 2.0 and testing the consumer checkpointing
functionality. I found that RemoteClusterUtils.translateOffsets does not give
checkpoints for consumers that run in assign mode.

I am using the MirrorMaker 2.0 of Kafka version 2.5.0 with Scala version 2.12.
My source Kafka setup is 1 broker and 1 zookeeper, Kafka version 1.0.0, Scala version 2.11.
My target Kafka setup is 1 broker and 1 zookeeper, Kafka version 1.0.0, Scala version 2.11.

I am only doing one-way replication from my source cluster to the target cluster.

Mirror Maker Config:

clusters = A, B
A.bootstrap.servers = localhost:9082
B.bootstrap.servers = localhost:9092

A->B.enabled = true
A->B.topics = .*
A->B.groups = .*

B->A.enabled = false
B->A.topics = .*

replication.factor=1

checkpoints.topic.replication.factor=1
heartbeats.topic.replication.factor=1
offset-syncs.topic.replication.factor=1

offset.storage.replication.factor=1
status.storage.replication.factor=1
config.storage.replication.factor=1

emit.heartbeats.interval.seconds = 2
refresh.topics.interval.seconds=1
refresh.groups.interval.seconds=1
emit.checkpoints.interval.seconds=1
sync.topic.configs.enabled=true
sync.topic.configs.interval.seconds=1
replication.policy.class=com.ie.naukri.replicator.SimpleReplicationPolicy


In the replication policy, I have removed topic renaming, so the topic is
replicated as-is (same name in the target cluster as in the source cluster).


Steps to replicate:
=
1) Create a topic on the source cluster
2) Push some data in the topic using console producer
3) Start a consumer in assign mode to read from the above topic but only 
from 1 partition.

Properties properties = new Properties();
properties.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9082");
properties.setProperty(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
        ByteArrayDeserializer.class.getName());
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        ByteArrayDeserializer.class.getName());
properties.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "TestTopic-123");
properties.setProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
properties.setProperty(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "2");

KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(properties);

// assign only partition 1 of the topic (no group subscription)
TopicPartition tp = new TopicPartition("TestTopic-123", 1);
consumer.assign(Collections.singleton(tp));

while (true) {
    ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<byte[], byte[]> record : records) {
        System.out.println(new String(record.value()) + "__" + record.partition());
        Thread.sleep(2000);
    }
}

4) Stop the consumer mid-way. Describe the consumer group on the source cluster to
get the lag information:

bin/kafka-consumer-groups.sh --describe --bootstrap-server localhost:9082 --group TestTopic-123
GROUP           TOPIC           PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
TestTopic-123   TestTopic-123   0          5               28              23

5) Run the translate-offsets method to print the downstream offsets:

Map<TopicPartition, OffsetAndMetadata> newOffsets =
        RemoteClusterUtils.translateOffsets(properties, "A", "TestTopic-123", Duration.ofMillis(5500));
System.out.println(newOffsets.toString());

6) An empty map is returned.

Expected outcome: the translated committed offsets should have been returned.

My debugging
===
On debugging the issue, I found that the checkpoint topic in the target
cluster did not have this group's committed offset.

I tried multiple times with different commit frequencies and topic/group names.
It didn't work. Only consumers running in subscribe mode, and the console consumer
with the --group flag, produce checkpoints.

Question

1) Is it intended functionality that an assign-mode consumer can never
be reset? Or is it a bug?


Any help would be greatly appreciated.





Re: Kafka compatibility with ZK

2020-09-03 Thread Manoj.Agrawal2
We also upgraded Kafka 2.2.1 to Kafka 2.5.0 and kept the same ZooKeeper; no issues
reported. Later we also upgraded ZooKeeper to 3.5.8. All good.

On 9/3/20, 8:42 PM, "Andrey Klochkov"  wrote:


Hello all,
FWIW we upgraded to Kafka 2.4.1 and kept ZK at 3.4.6, no issues noticed.

On Sun, Aug 2, 2020 at 10:04 AM Marina Popova
 wrote:

>
> Actually, I'm very interested in your experience as well. I'm about to
> start the same (similar) upgrade - from Kafka 0.11/ZK 3.4.13 to Kafka 2.4/ZK
> 3.5.6.
> 3.5.6
>
> I have Kafka and ZK as separate clusters.
>
> My plan is :
> 1. rolling upgrade the Kafka cluster to 2.4 - using the
> inter.broker.protocol.version set to 0.11 at first
> 2. rolling upgrade ZK cluster to 3.5.6
> 3. set inter.broker.protocol.version=2.4.0 and rolling restart the Kafka
> cluster again
>
> Anybody sees a problem with this approach?
>
>
> thanks,
> Marina
>
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, July 23, 2020 4:01 PM, Andrey Klochkov 
> wrote:
>
> > Hello,
> > We're upgrading our Kafka from 1.1.0 to 2.4.1 and I'm wondering if ZK
> needs
> > to be upgraded too (we're currently on 3.4.6). The upgrade guide says
> that
> > "kafka has switched to the XXX version of ZK" but never says if 
switching
> > to a newer ZK is mandatory or not. What are the guidelines on keeping
> Kafka
> > and ZK compatible?
> >
> >
> 
---
> >
> > Andrey Klochkov
>
>
>

--
Andrey Klochkov




Re: UPGRADING ZOOKEEPER FROM 3.4.13 TO 3.5.7

2020-09-03 Thread Manoj.Agrawal2
The issue has been fixed by copying an empty snapshot file to the data dir.

Thanks.
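
For reference, the flag from ZOOKEEPER-3056 that Enrico mentions below can be passed as a JVM system property when starting ZooKeeper; a sketch, assuming the stock zkServer.sh scripts:

export SERVER_JVMFLAGS="-Dzookeeper.snapshot.trust.empty=true"
bin/zkServer.sh restart

It tells ZooKeeper to trust a data dir that has transaction logs but no snapshot, which is exactly the state the "No snapshot found, but there are log entries" error complains about.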

On 9/2/20, 10:51 PM, "Enrico Olivelli"  wrote:


The official way to fix it is here:

https://issues.apache.org/jira/browse/ZOOKEEPER-3056

Basically we have a flag to allow the boot even in that case.
I suggest you upgrade to the latest 3.5.8 and not to 3.5.7.


Enrico

On Thu 3 Sep 2020, 03:51, Rijo Roy wrote:

> Hi Manoj,
> I just faced this yesterday and resolved it.
> Hopefully you are getting this error on one of the follower nodes; if yes, please
> create a backup folder in your ZooKeeper data directory and move version-2,
> which holds the ZooKeeper data, into the newly created backup folder.
> Starting the ZooKeeper process will then sync and recreate the version-2 folder in
> its data directory.
> Regards, Rijo Roy
> Sent from Yahoo Mail on Android
>
> On Thu, 3 Sep 2020 at 2:57 am, manoj.agraw...@cognizant.com wrote:
>
> HI ALL ,
> I’m planning to upgrade the Kafka 2.2.1 to kafka 2.5.0 , I m getting below
> error while upgrading zookeeper version as below . Any idea ?
>
>
>
>
>
> java.io.IOException: No snapshot found, but there are log entries.
> Something is broken!
>
>   at
> 
org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:240)
>
>   at
> org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:901)
>
>   at
> org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:887)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
>
> 2020-09-02 21:19:23,877 - ERROR [main:QuorumPeerMain@101] - Unexpected
> exception, exiting abnormally
>
> java.lang.RuntimeException: Unable to run quorum server
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:938)
>
>   at
> org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:887)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)
>
> Caused by: java.io.IOException: No snapshot found, but there are log
> entries. Something is broken!
>
>   at
> 
org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:240)
>
>   at
> org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)
>
>   at
> 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:901)
>
>   ... 4 more
>
>
>



UPGRADING ZOOKEEPER FROM 3.4.13 TO 3.5.7

2020-09-02 Thread Manoj.Agrawal2
Hi all,
I'm planning to upgrade Kafka 2.2.1 to Kafka 2.5.0, and I'm getting the below
error while upgrading the ZooKeeper version. Any idea?





java.io.IOException: No snapshot found, but there are log entries. Something is 
broken!

   at 
org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:240)

   at 
org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)

   at 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:901)

   at 
org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:887)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)

2020-09-02 21:19:23,877 - ERROR [main:QuorumPeerMain@101] - Unexpected 
exception, exiting abnormally

java.lang.RuntimeException: Unable to run quorum server

   at 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:938)

   at 
org.apache.zookeeper.server.quorum.QuorumPeer.start(QuorumPeer.java:887)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:205)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:123)

   at 
org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:82)

Caused by: java.io.IOException: No snapshot found, but there are log entries. 
Something is broken!

   at 
org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:240)

   at 
org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:240)

   at 
org.apache.zookeeper.server.quorum.QuorumPeer.loadDataBase(QuorumPeer.java:901)

   ... 4 more



Re: Re: Kafka cluster cannot connect to zookeeper

2020-08-29 Thread Manoj.Agrawal2
Try the steps below, one ZooKeeper host at a time (a zoo.cfg sketch follows the list):

1. Update conf/zoo.cfg with the server entries for the existing node and the first new node.

2. Add myid under dataDir on the new node.

3. Restart the ZooKeeper process on the existing node.

4. Start the ZooKeeper process on the new node.

5. Update conf/zoo.cfg again with the server entries for the existing two nodes and the second new node.

6. Add myid under dataDir on the second new node.

7. Restart the ZooKeeper process on the existing nodes.

8. Start the ZooKeeper process on the second new node.
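
For reference, a minimal zoo.cfg sketch for the final three-node ensemble (hostnames, ports, and dataDir are placeholders; the myid file on each host must match its server.N entry):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888

# on host zk2, for example:
echo 2 > /var/lib/zookeeper/myid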





On 8/29/20, 3:52 AM, "Li,Dingqun"  wrote:


My process for updating ZooKeeper:

1. Update conf/ zoo.cfg Configure the configuration of two new server nodes

2. Add myid under dataDir

3. Restart the existing zookeeper node

4. Start the other two zookeeper nodes

5. The existing zookeeper node is changed from stand-alone to leader

zookeeper version 3.4.14-4
kafka version 2.3.0

This is part of the log:

[2020-08-28 08:43:23,872] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x5 zxid:0x205e0 txntype:-1 reqpath:n/a Error Path:/brokers/topics Error:KeeperErrorCode = NodeExists for /brokers/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:23,945] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x6 zxid:0x205e1 txntype:-1 reqpath:n/a Error Path:/config/changes Error:KeeperErrorCode = NodeExists for /config/changes (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,018] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x7 zxid:0x205e2 txntype:-1 reqpath:n/a Error Path:/admin/delete_topics Error:KeeperErrorCode = NodeExists for /admin/delete_topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,092] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x8 zxid:0x205e3 txntype:-1 reqpath:n/a Error Path:/brokers/seqid Error:KeeperErrorCode = NodeExists for /brokers/seqid (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,166] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0x9 zxid:0x205e4 txntype:-1 reqpath:n/a Error Path:/isr_change_notification Error:KeeperErrorCode = NodeExists for /isr_change_notification (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,240] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xa zxid:0x205e5 txntype:-1 reqpath:n/a Error Path:/latest_producer_id_block Error:KeeperErrorCode = NodeExists for /latest_producer_id_block (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,313] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xb zxid:0x205e6 txntype:-1 reqpath:n/a Error Path:/log_dir_event_notification Error:KeeperErrorCode = NodeExists for /log_dir_event_notification (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,388] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xc zxid:0x205e7 txntype:-1 reqpath:n/a Error Path:/config/topics Error:KeeperErrorCode = NodeExists for /config/topics (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,461] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xd zxid:0x205e8 txntype:-1 reqpath:n/a Error Path:/config/clients Error:KeeperErrorCode = NodeExists for /config/clients (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,534] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xe zxid:0x205e9 txntype:-1 reqpath:n/a Error Path:/config/users Error:KeeperErrorCode = NodeExists for /config/users (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:43:24,607] INFO Got user-level KeeperException when processing sessionid:0x7d0679e7c0a90004 type:create cxid:0xf zxid:0x205ea txntype:-1 reqpath:n/a Error Path:/config/brokers Error:KeeperErrorCode = NodeExists for /config/brokers (org.apache.zookeeper.server.PrepRequestProcessor)
[2020-08-28 08:45:03,991] WARN Connection broken for id 74, my id = 185, error =  (org.apache.zookeeper.server.quorum.QuorumCnxManager)
java.io.EOFException
        at java.io.DataInputStream.readInt(DataInputStream.java:392)
        at org.apache.zookeeper.server.quorum.QuorumCnxManager$RecvWorker.run(QuorumCnxManager.java:1010)
[2020-08-28 08:45:03,991] WARN Interrupting SendWorker (org.apache.zookeeper.server.quorum.QuorumCnxManager)

Read only Access(ACL) for all topics in cluster to user

2020-08-28 Thread Manoj.Agrawal2
Hi,
We are using Kafka 2.2.1 and we have a requirement to provide read-only access
for a user to all topics existing in the Kafka cluster. Is there any way to add a
Kafka ACL rule for read access at the cluster level, or for all topics (topic*), for a
user? (See the sketch below.)
Thanks
Manoj A
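
A hedged sketch with the stock ACL CLI (the principal and ZooKeeper address are placeholders; in 2.2.1 the literal resource name '*' matches all topics, and --resource-pattern-type prefixed can scope by topic prefix instead):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=zk1:2181 \
  --add --allow-principal User:readonly-user \
  --operation Read --operation Describe \
  --topic '*'

A consuming application additionally needs Read on its consumer group, e.g. --group '*' or a specific group name.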



Re: Kafka cluster cannot connect to zookeeper

2020-08-28 Thread Manoj.Agrawal2
You haven't described how you are adding the ZooKeeper nodes.
The right way to add ZooKeeper nodes is one host at a time:

1. Update conf/zoo.cfg on the existing ZooKeeper node, adding the new host.
2. Restart the zk process on the existing host.
3. Start the zk process on the new node.

On 8/28/20, 8:20 AM, "Li,Dingqun"  wrote:


We have one zookeeper node and two Kafka nodes. After that, we expand the 
capacity of zookeeper: change the configuration of zookeeper node, restart it, 
and add two zookeeper nodes. After that, my Kafka cluster could not connect to 
the zookeeper cluster, and there was no information available in the log.

What should we do? Thank you




Can we use VIP ip rather than Kafka Broker host name in bootstrap string

2020-08-26 Thread Manoj.Agrawal2
Hi all,
Can we use a VIP address rather than the Kafka broker hostnames in the bootstrap
string on the producer side?
Any concerns, or a recommended way to do it?



Re: Not able to connect to bootstrap server when one broker down

2020-08-25 Thread Manoj.Agrawal2
What error are you getting? Can you share the exact error?
What version of the Kafka client library are you using?

On 8/25/20, 7:50 AM, "Prateek Rajput"  
wrote:


Hi, please if anyone can help, will be a huge favor.

*Regards,*
*Prateek Rajput* 


On Tue, Aug 25, 2020 at 12:06 AM Prateek Rajput 

wrote:

> Hi everyone,
> I am new to Kafka, and recently started working on Kafka in my company. We
> recently migrated our client and cluster from the *0.10.x* version to
> *2.3.0*. I am facing this issue quite often.
> I have provided all brokers in the *bootstrap.servers* config to instantiate
> the producer client, but while using this client for batch publishing,
> sometimes some of my mappers get stuck.
> I debugged and found that one broker was down (for some maintenance
> activity). It was getting stuck because the mapper's client was trying
> to connect to that node, and only that node, for the very first time, and
> failing with a NoRouteToHostException.
> I have read that, the very first time, the client selects a random
> broker and tries to connect to it to get the metadata of the whole
> cluster. Is there any way so that, on such exceptions, it can switch to
> another node dynamically instead of trying to connect to the same box
> again and again?
>
> *Regards,*
> *Prateek Rajput*
>


Re: Steps & best-practices to upgrade Confluent Kafka 4.1x to 5.3x

2020-08-19 Thread Manoj.Agrawal2
Great.
Share your findings with this group once you have completed the Confluent Kafka 4.1x
to 5.3x upgrade successfully.

I see many people here having the same question.

On 8/19/20, 10:38 AM, "Rijo Roy"  wrote:


Thanks Manoj!

Yeah, the plan is to start with non-prod and validate first before going to 
prod.

Thanks & Regards,
Rijo Roy

On 2020/08/19 17:33:53,  wrote:
> I advise to do it non-prod for validation .
> You can backup data log folder if you want but I have'nt see any issue . 
but better to backup data if it small .
>
> Don’t change below value to latest until you done full validation , once 
you changed  to latest then you can't rollback .
>
> inter.broker.protocol.version=2.1.x
>
> On 8/19/20, 9:52 AM, "Rijo Roy"  wrote:
>
>
>
> Thanks Manoj! Appreciate your help..
>
> I will follow the steps you pointed out..
>
> Do you think there is a need to :
> 1. backup the data before the rolling upgrade
> 2. some kind of datasync that should be considered here.. I don't 
think this is required as I am performing an in-place upgrade..
>
> Thanks & Regards,
> Rijo Roy
>
> On 2020/08/18 20:45:42,  wrote:
> > You can follow below steps
> >
> > 1. set inter.broker.protocol.version=2.1.x  and rolling restart 
kafka
> > 2. Rolling upgrade the Kafka cluster to 2.5 -
> > 3. rolling upgrade ZK cluster
> > Validate the kafka .
> >
> > 4. set inter.broker.protocol.version= new version and rolling 
restart the Kafka
> >
> >
> >
> > On 8/18/20, 12:54 PM, "Rijo Roy"  wrote:
> >
> >
> >
> > Hi,
> >
> > I am a newbie in Kafka and would greatly appreciate if someone 
could help with best-practices and steps to upgrade to v5.3x.
> >
> > Below is my existing set-up:
> > OS version:  Ubuntu 16.04.6 LTS
> > ZooKeeper version : 3.4.10
> > Kafka version : confluent-kafka-2.11 / 1.1.1-cp2 / v4.1.1
> >
> > We need to upgrade our OS version to Ubuntu 18.04 LTS whose 
minimum requirement is to upgrade Kafka to v5.3x. Could someone please help me 
with the best-practices & steps for the upgrade..
> >
> > Please let me know if you need any more information so that you 
could help me.
> >
> > Appreciate your help!
> >
> > Thanks & Regards,
> > Rijo Roy
> >
> >
> >
> >
>
>

Re: Steps & best-practices to upgrade Confluent Kafka 4.1x to 5.3x

2020-08-19 Thread Manoj.Agrawal2
I advise doing it in non-prod first for validation.
You can back up the data log folder if you want, but I haven't seen any issues;
still, better to back up the data if it is small.

Don't change the below value to the latest until you have done full validation;
once you change it to the latest, you can't roll back.

inter.broker.protocol.version=2.1.x
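
For reference, a sketch of how that staging looks in server.properties across the rolling restarts (the exact version strings and the service name are placeholders; the sequence follows the steps quoted below):

# phase 1: keep the old protocol while upgrading binaries, broker by broker
inter.broker.protocol.version=2.1

# upgrade the packages and restart one broker at a time, waiting for
# under-replicated partitions to drain before moving on

# phase 2: only after full validation, bump the protocol and do a second
# rolling restart; this step is the point of no return
inter.broker.protocol.version=2.5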

On 8/19/20, 9:52 AM, "Rijo Roy"  wrote:


Thanks Manoj! Appreciate your help..

I will follow the steps you pointed out..

Do you think there is a need to :
1. backup the data before the rolling upgrade
2. some kind of datasync that should be considered here.. I don't think 
this is required as I am performing an in-place upgrade..

Thanks & Regards,
Rijo Roy

On 2020/08/18 20:45:42,  wrote:
> You can follow the steps below:
>
> 1. Set inter.broker.protocol.version=2.1.x and rolling-restart Kafka.
> 2. Rolling-upgrade the Kafka cluster to 2.5.
> 3. Rolling-upgrade the ZK cluster.
> Validate Kafka.
>
> 4. Set inter.broker.protocol.version to the new version and rolling-restart Kafka.
>
>
>
> On 8/18/20, 12:54 PM, "Rijo Roy"  wrote:
>
> [External]
>
>
> Hi,
>
> I am a newbie in Kafka and would greatly appreciate if someone could 
help with best-practices and steps to upgrade to v5.3x.
>
> Below is my existing set-up:
> OS version:  Ubuntu 16.04.6 LTS
> ZooKeeper version : 3.4.10
> Kafka version : confluent-kafka-2.11 / 1.1.1-cp2 / v4.1.1
>
> We need to upgrade our OS version to Ubuntu 18.04 LTS whose minimum 
requirement is to upgrade Kafka to v5.3x. Could someone please help me with the 
best-practices & steps for the upgrade..
>
> Please let me know if you need any more information so that you could 
help me.
>
> Appreciate your help!
>
> Thanks & Regards,
> Rijo Roy
>


Re: Steps & best-practices to upgrade Confluent Kafka 4.1x to 5.3x

2020-08-18 Thread Manoj.Agrawal2
You can follow the steps below:

1. Set inter.broker.protocol.version=2.1.x and rolling-restart Kafka.
2. Rolling-upgrade the Kafka cluster to 2.5.
3. Rolling-upgrade the ZK cluster.
Validate Kafka.

4. Set inter.broker.protocol.version to the new version and rolling-restart Kafka.
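
To make that concrete, a rough per-broker sketch of steps 1 and 4 (service
name, config path and version numbers are assumptions; substitute your own):

# Step 1: pin the protocol to the CURRENT version so old and new brokers interoperate
echo "inter.broker.protocol.version=1.1" >> /etc/kafka/server.properties
systemctl restart kafka    # one broker at a time, waiting for ISR to recover in between

# Steps 2-3: upgrade the Kafka binaries and ZK, rolling restart, then validate

# Step 4: only after full validation, raise the protocol version (no rollback after this)
sed -i 's/^inter.broker.protocol.version=.*/inter.broker.protocol.version=2.3/' /etc/kafka/server.properties
systemctl restart kafka    # again rolling, one broker at a time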



On 8/18/20, 12:54 PM, "Rijo Roy"  wrote:

[External]


Hi,

I am a newbie in Kafka and would greatly appreciate if someone could help 
with best-practices and steps to upgrade to v5.3x.

Below is my existing set-up:
OS version:  Ubuntu 16.04.6 LTS
ZooKeeper version : 3.4.10
Kafka version : confluent-kafka-2.11 / 1.1.1-cp2 / v4.1.1

We need to upgrade our OS version to Ubuntu 18.04 LTS whose minimum 
requirement is to upgrade Kafka to v5.3x. Could someone please help me with the 
best-practices & steps for the upgrade..

Please let me know if you need any more information so that you could help 
me.

Appreciate your help!

Thanks & Regards,
Rijo Roy





Re: Finding consumer Lag

2020-08-11 Thread Manoj.Agrawal2
Can you please share what action you are performing, and how?
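
In the meantime, you can check the group's lag directly; a sketch (the
bootstrap address and group name are placeholders):

bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
# the LAG column is LOG-END-OFFSET minus CURRENT-OFFSET, per partition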

On 8/11/20, 10:19 PM, "Indu  V"  wrote:

[External]


Hi Team,

I am facing an issue in a clustered Kafka environment,

org.apache.kafka.common.KafkaException: Cannot perform send because at least one previous transactional or idempotent request has failed with errors.
        at org.apache.kafka.clients.producer.internals.TransactionManager.failIfNotReadyForSend(TransactionManager.java:279) ~[kafka-clients-2.0.1.jar:?]

Brokers - 3
Replication factor - 3
ISR - 2

Kafka version - kafka_2.12-2.5.0

Please help me to solve this issue.

Regards,
Indu V





Re: backup

2020-08-09 Thread Manoj.Agrawal2
I'm also working on MirrorMaker 2.0. Do you have any documentation for the
MirrorMaker 2.0 config setup, or can you share your MirrorMaker 2.0 config?
Have you encountered any issues?
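
For reference, the minimal dedicated-cluster setup I've been sketching looks
roughly like this mm2.properties (Kafka 2.4+; cluster aliases and addresses
are placeholders), started with bin/connect-mirror-maker.sh mm2.properties:

clusters = primary, backup
primary.bootstrap.servers = primary-host:9092
backup.bootstrap.servers = backup-host:9092
primary->backup.enabled = true
primary->backup.topics = .*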

On 8/9/20, 5:02 AM, "Liam Clarke-Hutchinson"  wrote:

[External]


Hi Dor,

Yep, we're using Mirrormaker 2.0 currently, migrated from MM 1 with no real
issues. Admittedly the documentation is a bit lacking currently, but
between the KIP (KIP-382: MirrorMaker 2.0) and
the occasional code reading, we got there fine. One caveat to my experience
- it's built on top of Kafka Connect, and as we already had a cluster of KC
workers to stream data into/out of our Kafka cluster from various sources,
MM 2 was easy to deploy, just another config. So if you're starting from
scratch, there might be some overhead around getting your Kafka Connect
workers deployed and configured correctly. But if you're looking to run a
hot/warm (main/backup) cluster scenario, MM 2 is ideal.

Kind regards,

Liam Clarke-Hutchinson

On Sun, Aug 9, 2020 at 11:56 PM Dor Ben Dov  wrote:

> Hi Liam,
> No actual problem just wondering, still you answered most of the things I
> already know so no I am convinced that I am ok.
> Still, wondering about the mmk2. How reliable is it, have you used it in
> production for instance?
>
> Regards,
> Dor
>
> -Original Message-
> From: Liam Clarke-Hutchinson 
> Sent: Sunday, August 9, 2020 2:52 PM
> To: users@kafka.apache.org
> Subject: Re: backup
>
> Hi Dor,
>
> There are multiple approaches.
>
> 1) Clone your Kafka broker volumes
> 2) Use Kafka Connect to stream all data to a different storage system such
> as Hadoop, S3, etc.
> 3) Use Mirrormaker to replicate all data to a backup cluster.
>
> Which approach is right for you really depends on your needs, but
> generally, if you have enough nodes in your clusters, and a correct
> replication setting for a topic, you won't need to backup Kafka. As a rule
> of thumb, a topic with a replication factor of N can survive N - 1 node
> failures without data loss.
>
> If you can provide more information about the problems you're trying to
> solve, our advice can be more directed :)
>
> Kind regards,
>
> Liam Clarke-Hutchinson
>
> On Sun, Aug 9, 2020 at 11:43 PM Dor Ben Dov 
> wrote:
>
> > Hi All,
> > What is the best recommended way, and tool to backup kafka in 
production?
> > Regards,
> > Dor



Re: Kafka topic partition distributing evenly on disks

2020-08-07 Thread Manoj.Agrawal2
Or you can move the data directories manually. I'm assuming you have replication factor > 1.
Stop the Kafka process on broker 1.
Move one or two partition log directories from disk 1 to disk 2.
Then start the Kafka process again.

Wait for the ISR to sync.

Then you can repeat this step on the next broker.
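
A minimal sketch of one iteration (service name and paths are assumptions;
partition directories are named topic-partition):

systemctl stop kafka
mv /data/disk1/kafka-logs/mytopic-3 /data/disk2/kafka-logs/
systemctl start kafka
# wait until under-replicated partitions drop back to zero before moving on to the next broker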

On 8/7/20, 6:45 AM, "William Reynolds"  
wrote:

[External]


Hmm, that's odd, I am sure it was in the docs previously. Here is the
KIP on it: https://cwiki.apache.org/confluence/display/KAFKA/KIP-113%3A+Support+replicas+movement+between+log+directories
Basically the reassignment json that you get looks like this from the
initial generation, and if you already have a reassignment file you can
just add the log_dirs section to each partition entry:

{
  "version" : int,
  "partitions" : [
    {
      "topic" : str,
      "partition" : int,
      "replicas" : [int],
      "log_dirs" : [str]   <-- NEW. A log directory can be either "any", or a
                               valid absolute path that begins with '/'. This is
                               an optional field. It is treated as an array of
                               "any" if this field is not explicitly specified
                               in the json file.
    },
    ...
  ]
}
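
For example, a concrete file for one partition, pinning the first replica to a
specific disk (broker ids and paths here are made up; log_dirs must have one
entry per replica, and "any" leaves that replica's placement alone):

{
  "version": 1,
  "partitions": [
    {"topic": "mytopic", "partition": 0, "replicas": [1, 2, 3],
     "log_dirs": ["/data/disk2/kafka-logs", "any", "any"]}
  ]
}

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file move.json --execute --throttle 50000000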

Hope that helps
William

On 07/08/2020, Péter Nagykátai  wrote:
> Thank you William,
>
> I checked the doc and don't see any instructions regarding disks. Should I
> simply "move around" the topics and Kafka will assign the topics evenly on
> the two disks (per broker)? The current setup looks like this (for the
> topic in question, 15 primary, replica partitions):
>
> Broker 1 - disk 1: 8 partition
> Broker 1 - disk 2: 2 partition
>
> Broker 2 - disk 1: 8 partition
> Broker 2 - disk 2: 2 partition
>
> Broker 3 - disk 1: 8 partition
> Broker 3 - disk 2: 2 partition
>
> Thanks!
>
> On Fri, Aug 7, 2020 at 1:01 PM William Reynolds <
> william.reyno...@instaclustr.com> wrote:
>
>> Hi Péter,
>> Sounds like time to reassign the partitions you have across all the
>> brokers/data dirs using the instructions from here
>> https://kafka.apache.org/documentation/#basic_ops_automigrate. That
>> assumes that your partition strategy has somewhat evenly filled your
>> partitions and given it may move all the partitions it could be a bit
>> intensive so be sure to use the throttle option.
>> Cheers
>> William
>>
>> On 07/08/2020, Péter Nagykátai  wrote:
>> > Hello everybody,
>> >
>> > Thank you for the detailed answers. My issue is partly answered here:
>> >
>> >
>> >
>> >
>> > *This rule also applies to disk-level, which means that when a set of
>> > partitions is assigned to a specific broker, each of the disks will get
>> > the same number of partitions without considering the load of disks at
>> > that time.*
>> >
>> >  I admit, I didn't provide enough info either.
>> >
>> > So my problem is that an existing topic got a huge surge of events for
>> this
>> > week. I knew that'll happen and I modified the partition count.
>> > Unfortunately, it occurred to me a bit later, that I'll likely need
>> > some
>> > extra disk space. So I added an extra disk to each broker. The thing I
>> > didn't know, that Kafka won't evenly distribute the partitions on the
>> > disks.
>> > So the question still remains:
>> >  Is there any way to have Kafka evenly distribute data on its disks?
>> > Also, what options do I have *after *I'm in the situation I described
>> > above? (preferably without deleting the topic)
>> >
>> > Thanks!
>> >
>> > On Fri, Aug 7, 2020 at 12:00 PM Yingshuan Song
>> > 
>> > wrote:
>> >
>> >> Hi Peter,
>> >> Agreed with Manoj and Vinicius, i think those rules led to this result
>> >> :
>> >>
>> >> 1)the partitions of a topic - N and replication number - R determine
>> >> the
>> >> real partition-replica count of this topic, which is N * R;
>> >> 2)   kafka can distribute partitions evenly among brokers, but it is
>> >> based
>> >> on the broker count when the topic was created, this is important.
>> >> If we create a topic (N - 4, R - 3) in a kafka cluster which contains
>> >> 3
>> >> kafka brokers, then 4 * 3 / 3 = 4 partitions will be assigned to each
>> >> broker.
>> >> But if a new broker was added into 

Re: compatible kafka version to use when using with logstash 7.5.1-1

2020-08-06 Thread Manoj.Agrawal2
Are you getting the error at the Kafka broker, or when producing/consuming messages?
Can you please provide more detail on how you upgraded and what error you are
getting? It all depends on how you did the upgrade.



On 8/6/20, 4:13 PM, "Satish Kumar"  wrote:

[External]


Hello,

I upgraded kafka from 0.10 to 2.5.0 and also I upgraded logstash from 2.4
to 7.5

When I had Kafka 0.10 and Logstash 2.4 the messages used to forward
without any problems. But after the upgrade I'm getting errors in both
logstash and kafka logs so I would like to know what is the compatible
kafka version to use with logstash ( logstash is using kafka-integration
plugin 10.0.0). Please let me know what the compatible version is. I will
do upgrades/downgrades according to that.


This e-mail and any files transmitted with it are for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
If you are not the intended recipient(s), please reply to the sender and 
destroy all copies of the original message. Any unauthorized review, use, 
disclosure, dissemination, forwarding, printing or copying of this email, 
and/or any action taken in reliance on the contents of this e-mail is strictly 
prohibited and may be unlawful. Where permitted by applicable law, this e-mail 
and other e-mail communications sent to and from Cognizant e-mail addresses may 
be monitored.
This e-mail and any files transmitted with it are for the sole use of the 
intended recipient(s) and may contain confidential and privileged information. 
If you are not the intended recipient(s), please reply to the sender and 
destroy all copies of the original message. Any unauthorized review, use, 
disclosure, dissemination, forwarding, printing or copying of this email, 
and/or any action taken in reliance on the contents of this e-mail is strictly 
prohibited and may be unlawful. Where permitted by applicable law, this e-mail 
and other e-mail communications sent to and from Cognizant e-mail addresses may 
be monitored.


Re: Kafka topic partition distributing evenly on disks

2020-08-06 Thread Manoj.Agrawal2
What do you mean by "older" disks?

On 8/6/20, 12:05 PM, "Péter Nagykátai"  wrote:

[External]


Yeah, but it doesn't do that. My "older" disks have ~70 partitions, the
newer ones ~5 partitions. That's why I'm asking what went wrong.

On Thu, Aug 6, 2020 at 8:35 PM  wrote:

> Kafka evenly distributes the number of partitions across the disks, so in
> your case each disk should hold an equal share of the topic's partitions.
> It is the producer's job to produce data evenly across the topic's
> partitions via the partition key.
> How is the partition key generated: is it auto-generated, or does the
> producer send a key along with each message?
>
>
> On 8/6/20, 7:29 AM, "Péter Nagykátai"  wrote:
>
> [External]
>
>
> Hello,
>
> I have a Kafka cluster with 3 brokers (v2.3.0) and each broker has 2
> disks
> attached. I added a new topic (heavyweight) and was surprised that
> even if
> the topic has 15 partitions, those weren't distributed evenly on the
> disks.
> Thus I got one disk that's almost empty and the other almost filled
> up. Is
> there any way to have Kafka evenly distribute data on its disks?
>
> Thank you!
>
>




Re: Kafka topic partition distributing evenly on disks

2020-08-06 Thread Manoj.Agrawal2
Kafka evenly distributes the number of partitions across the disks, so in your
case each disk should hold an equal share of the topic's partitions.
It is the producer's job to produce data evenly across the topic's partitions
via the partition key.
How is the partition key generated: is it auto-generated, or does the producer
send a key along with each message?
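
To see how the partitions actually sit on each disk, kafka-log-dirs.sh can
help (available since Kafka 1.0; host and topic name are placeholders):

bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe --topic-list mytopic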


On 8/6/20, 7:29 AM, "Péter Nagykátai"  wrote:

[External]


Hello,

I have a Kafka cluster with 3 brokers (v2.3.0) and each broker has 2 disks
attached. I added a new topic (heavyweight) and was surprised that even if
the topic has 15 partitions, those weren't distributed evenly on the disks.
Thus I got one disk that's almost empty and the other almost filled up. Is
there any way to have Kafka evenly distribute data on its disks?

Thank you!




Re: RecordTooLargeException with old (0.10.0.0) consumer

2020-07-28 Thread Manoj.Agrawal2
Hi,

You also need to make the change on the producer and consumer side as well.

server.properties:
message.max.bytes=15728640
replica.fetch.max.bytes=15728640
max.request.size=15728640
fetch.message.max.bytes=15728640

producer.properties:
max.request.size=15728640

consumer:
max.partition.fetch.bytes=15728640
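
The broker-side settings above need a restart; the per-topic limit can also be
raised online with kafka-configs.sh, e.g. (host and topic name are placeholders):

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --entity-type topics --entity-name my-topic --add-config max.message.bytes=15728640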



On 7/28/20, 9:51 AM, "Thomas Becker"  wrote:

[External]

We have some legacy applications using an old (0.10.0.0) version of the 
consumer that are hitting RecordTooLargeExceptions with the following message:

org.apache.kafka.common.errors.RecordTooLargeException: There are some 
messages at [Partition=Offset]: {mytopic-0=13920987} whose size is larger than 
the fetch size 1048576 and hence cannot be ever returned. Increase the fetch 
size, or decrease the maximum message size the broker will allow.

We have not increased the maximum message size on either the broker nor 
topic level, and I'm quite confident no messages approaching that size are in 
the topic. Further, even if I increase the max.partition.fetch.bytes to a very 
large value such as Integer.MAX_VALUE, the error still occurs. I stumbled 
across https://issues.apache.org/jira/browse/KAFKA-4762
which seems to match what we're seeing, but our messages are not compressed.
But sure enough, a test application using the 0.10.1.0 consumer is able to
consume the topic with no issues. Unfortunately upgrading our legacy
applications is difficult for other reasons. Any ideas what's happening here?

--
Tommy Becker
Principal Engineer
Pronouns: he/him/his
O: 919.460.4747
E: thomas.bec...@xperi.com
www.xperi.com




Re: mirror whitelist

2020-07-25 Thread Manoj.Agrawal2
What version of Kafka are you using?
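
Also note that --whitelist is interpreted as a Java regular expression, so
shell quoting matters; a sketch for one or several topics:

kafka-mirror-maker.sh --consumer.config config.properties --producer.config producer.properties --whitelist 'id_u_v1|identity_users_v1'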

On 7/25/20, 2:22 AM, "Dumitru-Nicolae Marasoui"  
wrote:

[External]


Hello kafka community,

Doing the following cli command to copy messages from one cluster to
another, without any transformation on the binary keys/values of the
messages:

kafka-mirror-maker.sh --consumer.config=config.properties
--producer.config=producer.properties --whitelist="id_u_v1"

I am getting:

ERROR Invalid expression syntax: identity_users_v1
(kafka.tools.MirrorMaker$)

Do you have any suggestions on how to try for that particular topic?
Thank you
Nicolae


--

Dumitru-Nicolae Marasoui

Software Engineer




Re: Kafka - producer writing to a certain broker...

2020-07-25 Thread Manoj.Agrawal2
You should use an active-active MirrorMaker setup.
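
A rough sketch of an active-active MirrorMaker 2 config (cluster aliases and
addresses are placeholders; with the default replication policy, remote topics
show up prefixed, e.g. eu.mytopic on the US side, so consumers can read both
the local and the mirrored stream):

clusters = eu, us
eu.bootstrap.servers = eu-broker:9092
us.bootstrap.servers = us-broker:9092
eu->us.enabled = true
us->eu.enabled = true
eu->us.topics = .*
us->eu.topics = .*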

On 7/25/20, 9:03 AM, "Rajib Deb"  wrote:

[External]


Hi,
I came across the below question and wanted to seek an answer on the same.

If a producer needs to write to a certain broker only, is this possible. 
For example, if the producer is in Europe, it will write to the broker near to 
Europe, if US it will write to broker near to US. But consumers should be able 
to read from both the topics.

Thanks
Rajib




Re: Consumer Groups Describe is not working

2020-07-08 Thread Manoj.Agrawal2
What error are you getting? Just make sure the user has the appropriate permissions.

Please share the error you are getting.
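
If it turns out to be a permissions problem, the user needs Describe on the
group; a sketch of granting it (principal and group name are placeholders;
the ZooKeeper-based form matches Kafka 2.0):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:admin --operation Describe --group my-group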

On 7/8/20, 3:56 AM, "Ann Pricks"  wrote:

[External]


Hi Team,

Any update on this.


Regards,
Pricks

From: Ann Pricks 
Date: Friday, 3 July 2020 at 4:10 PM
To: "users@kafka.apache.org" 
Subject: Consumer Groups Describe is not working

Hi Team,

Today, In our production cluster, we faced an issue with Kafka (Old offsets 
was getting pulled from spark streaming application) and couldn't debug the 
issue using kafka_consumer_group.sh CLI.

Whenever we execute the below command to list the consumer groups, it is 
working fine. However, whenever we try to describe the consumer group to get to 
know the offset details, it didn't work (Nothing is getting displayed. Just 
blank).

Command to list the consumer group (Working):
/opt/kafka/kafka_2.11-2.0.0/bin/kafka-consumer-groups.sh \
--bootstrap-server broker1:2345,broker2:2345,broker3:2345 \
--list \
--command-config /opt/kafka/kafka_2.11-2.0.0/config/jaas_config.conf

Command to list the consumer group (Not Working):
/opt/kafka/kafka_2.11-2.0.0/bin/kafka-consumer-groups.sh \
--bootstrap-server broker1:2345,broker2:2345,broker3:2345 \
--describe \
--group 
spark-kafka-source-f8e218d5-16d2-4e63-a25c-2f96fabb2809-605351645-driver-0 \
--command-config /opt/kafka/kafka_2.11-2.0.0/config/jaas_config.conf

JAAS Config File:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule 
required \
username="admin" \
password="123@admin";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
exclude.internal.topics=false

Kindly help us to monitor our Kafka cluster in case of any issues.

Details:
Kafka Version: 2.0.0
Security:
   sasl.enabled.mechanisms=SCRAM-SHA-512
   sasl.mechanism.inter.broker.protocol=SCRAM-SHA-512
   security.inter.broker.protocol=SASL_PLAINTEXT

Please let us know in case of any other details required from our end.

Regards,
AnnPricksEdmund




Re: How to Change number of partitions without Rolling restart?

2020-06-21 Thread Manoj.Agrawal2
Or, if you don't want to automate it, use a spreadsheet to generate the command
below for all topics, put all 350 statements in a script, and run it.
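
A minimal sketch of such a script, assuming a file topics.txt with one topic
name per line and 6 as the target partition count:

while read -r topic; do
  ./bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic "$topic" --partitions 6
done < topics.txt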



On 6/21/20, 9:28 PM, "Peter Bukowinski"  wrote:

[External]


You can’t use a wildcard and must address each topic individually. You can 
automate it with a for loop that takes an array/list of topics as the item to 
iterate over.

-- Peter Bukowinski

> On Jun 21, 2020, at 9:16 PM, sunil chaudhari 
 wrote:
>
> Manoj,
> You mean I have execute this command manually for all 350 Topics which I
> already have?
> Is there any possibility I can use any wild cards?
>
>
>> On Mon, 22 Jun 2020 at 9:28 AM,  wrote:
>>
>> You can use below command to alter to partition
>>
>> ./bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic my-topic
>> --partitions 6
>>
>> Thanks
>> Manoj
>>
>>
>>
>> On 6/21/20, 7:38 PM, "sunil chaudhari" 
>> wrote:
>>
>>[External]
>>
>>
>>Hi,
>>I already have 350 topics created. Please guide me how can I do that
>> for
>>these many topics?
>>Also I want each new topic to be created with more number partitions
>>automatically than previous number 3, which I had set in properties.
>>
>>Regards,
>>Sunil.
>>
>>On Mon, 22 Jun 2020 at 6:31 AM, Liam Clarke-Hutchinson <
>>liam.cla...@adscale.co.nz> wrote:
>>
>>> Hi Sunil,
>>>
>>> The broker setting num.partitions only applies to automatically
>> created
>>> topics (if that is enabled) at the time of creation. To change
>> partitions
>>> for a topic you need to use kafka-topics.sh to do so for each topic.
>>>
>>> Kind regards,
>>>
>>> Liam Clarke-Hutchinson
>>>
>>> On Mon, Jun 22, 2020 at 3:16 AM sunil chaudhari <
>>> sunilmchaudhar...@gmail.com>
>>> wrote:
>>>
 Hi,
 I want to change number of partitions for all topics.
 How can I change that? Is it server.properties which I need to
>> change?
 Then, in that case I have to restart broker right?

 I checked from confluent control center, there is no option to
>> change
 partitions.

 Please advise.

 Regards,
 Sunil

>>>
>>
>>



Re: How to Change number of partitions without Rolling restart?

2020-06-21 Thread Manoj.Agrawal2
You can use the command below to alter the partition count:

./bin/kafka-topics.sh --alter --zookeeper localhost:2181 --topic my-topic 
--partitions 6

 Thanks
Manoj



On 6/21/20, 7:38 PM, "sunil chaudhari"  wrote:

[External]


Hi,
I already have 350 topics created. Please guide me how can I do that for
these many topics?
Also I want each new topic to be created with more number partitions
automatically than previous number 3, which I had set in properties.

Regards,
Sunil.

On Mon, 22 Jun 2020 at 6:31 AM, Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:

> Hi Sunil,
>
> The broker setting num.partitions only applies to automatically created
> topics (if that is enabled) at the time of creation. To change partitions
> for a topic you need to use kafka-topics.sh to do so for each topic.
>
> Kind regards,
>
> Liam Clarke-Hutchinson
>
> On Mon, Jun 22, 2020 at 3:16 AM sunil chaudhari <
> sunilmchaudhar...@gmail.com>
> wrote:
>
> > Hi,
> > I want to change number of partitions for all topics.
> > How can I change that? Is it server.properties which I need to change?
> > Then, in that case I have to restart broker right?
> >
> > I checked from confluent control center, there is no option to change
> > partitions.
> >
> > Please advise.
> >
> > Regards,
> > Sunil
> >
>




Re: Kafka ACL Support

2020-05-13 Thread Manoj.Agrawal2
Kafka ACL support is there for all recent versions; the pluggable authorizer has shipped with Kafka since 0.9.0.
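
Enabling it is a broker-side setting; a sketch for server.properties (the
classic ZooKeeper-backed authorizer; the super.users principal is a placeholder):

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin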

On 5/13/20, 8:02 AM, "Jadhawar, Ganesh"  wrote:

[External]


Hi Team,

Please let us know kafka ACL authorizer is support from which kafka release.

Thanks,
Ganesh




Re: kafka-console-consumer.sh: Port already in use Exception after enable JMX

2020-05-10 Thread Manoj.Agrawal2
You can change JMX_PORT to any available port, e.g. 9992.
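
Rather than hardcoding the port in kafka-run-class.sh, you can also override it
per process so the tools don't collide with the broker's JMX port; a sketch
(broker address and topic are placeholders):

JMX_PORT=9992 bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning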

On 5/10/20, 7:49 PM, "wangl...@geekplus.com.cn"  
wrote:

[External]


Add  JMX_PORT=9988 to kafka-run-class.sh  to enable JMX

After executing bin/kafka-console-consumer.sh, there's an exception:

Error: Exception thrown by the agent : java.rmi.server.ExportException: 
Port already in use: 9988; nested exception is:
java.net.BindException: Address already in use (Bind failed)




wangl...@geekplus.com.cn





Re: Change RF factor...

2020-05-08 Thread Manoj.Agrawal2
You can use the commands below.

To generate the JSON file:
./bin/kafka-reassign-partitions.sh --zookeeper zookeeper_host:2181 --generate --topics-to-move-json-file test.json --broker-list 10,20,30   <-- list of broker ids

To execute the reassignment:
./bin/kafka-reassign-partitions.sh --zookeeper zookeeper_host:2181 --execute --reassignment-json-file changeTest.json
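
Also, looking at the JSON you posted below: the key must be "partition"
(singular), and the topic name in the file has to match the real topic exactly
(your error mentions test_results while the topic is te_re). A sketch for
raising te_re to replication factor 3 (the third broker id is a placeholder):

{"version":1,"partitions":[{"topic":"te_re","partition":0,"replicas":[1546332950,1546332908,THIRD_BROKER_ID]}]}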



On 5/8/20, 12:22 PM, "Rajib Deb"  wrote:

[External]


It has three brokers


{"version":1,"partitions":[{"topic":"te_re","partitions":0,"replicas":[1546332950,1546332908]}]}

Thanks

-Original Message-
From: manoj.agraw...@cognizant.com 
Sent: Friday, May 8, 2020 12:18 PM
To: users@kafka.apache.org
Subject: Re: Change RF factor...

[**EXTERNAL EMAIL**]

How many broker you have on this cluster and what is content of -- 
increase-replication-factor.json

On 5/8/20, 12:16 PM, "Rajib Deb"  wrote:

[External]


Hi I have by mistake created a topic with replication factor of 1. I am 
trying to increase the replication, but I get the below error. Can anyone 
please let me know if I am doing anything wrong. The Topic is created with 
single partition(te_re-0).

./kafka-reassign-partitions.sh --zookeeper :2181 --command-config 
kafka.properties --reassignment-json-file increase-replication-factor.json 
-execute


Partitions reassignment failed due to The proposed assignment contains non-existent partitions: ListBuffer(test_results-0)
kafka.common.AdminCommandFailedException: The proposed assignment contains non-existent partitions: ListBuffer(te_re-0)
        at kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:341)
        at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:209)
        at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:205)
        at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:65)
        at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)

Thanks




Re: Change RF factor...

2020-05-08 Thread Manoj.Agrawal2
How many brokers do you have on this cluster, and what is the content of increase-replication-factor.json?

On 5/8/20, 12:16 PM, "Rajib Deb"  wrote:

[External]


Hi I have by mistake created a topic with replication factor of 1. I am 
trying to increase the replication, but I get the below error. Can anyone 
please let me know if I am doing anything wrong. The Topic is created with 
single partition(te_re-0).

./kafka-reassign-partitions.sh --zookeeper :2181 --command-config 
kafka.properties --reassignment-json-file increase-replication-factor.json 
-execute


Partitions reassignment failed due to The proposed assignment contains non-existent partitions: ListBuffer(test_results-0)
kafka.common.AdminCommandFailedException: The proposed assignment contains non-existent partitions: ListBuffer(te_re-0)
        at kafka.admin.ReassignPartitionsCommand$.parseAndValidate(ReassignPartitionsCommand.scala:341)
        at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:209)
        at kafka.admin.ReassignPartitionsCommand$.executeAssignment(ReassignPartitionsCommand.scala:205)
        at kafka.admin.ReassignPartitionsCommand$.main(ReassignPartitionsCommand.scala:65)
        at kafka.admin.ReassignPartitionsCommand.main(ReassignPartitionsCommand.scala)

Thanks




Re: KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

2020-05-05 Thread Manoj.Agrawal2


Glad it worked for you.

Some Kafka admin tooling runs against ZooKeeper, and sometimes you don't have
access to the ZooKeeper host/port. I don't know how you are managing the
Kafka/ZK cluster in your scenario, but for security purposes ZooKeeper access
is often limited to the Kafka cluster itself.


From: SenthilKumar K 
Date: Tuesday, May 5, 2020 at 12:06 PM
To: "Agrawal, Manoj (Cognizant)" , Senthil kumar 

Cc: "users@kafka.apache.org" , 
"senthilec...@apache.org" 
Subject: Re: KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

[External]
Thanks Manoj. It works for me.

Looks to me the KafkaAdminClient (Singleton instance ) is faster than 
Consumer.partitionsFor() API. In terms of performance which one is good to 
fetch the metadata of a given topic. Thanks!

On Wed, May 6, 2020 at 12:26 AM manoj.agraw...@cognizant.com wrote:
I think you can filter the list of partitions returned by
KafkaConsumer.partitionsFor(): include a partition in the list only if its
PartitionInfo.leader() is set (a null leader means the partition currently has
no leader).



On 5/5/20, 11:44 AM, "SenthilKumar K"  wrote:
mailto:senthilec...@gmail.com>> wrote:

[External]


Hi Team, We are using KafkaConsumer.partitionsFor() API to find the list of
available partitions. After fetching the list of partitions, We use
Consumer.offsetsForTimes() API to find the offsets for a given timestamp.

The API Consumer.partitionsFor() simply returning all partitions including
the partitions which the leader is set to -1. It's causing an issue
(Timeout Exception) when we call Consumer.offsetsForTimes() API.

I'm planning to use adminClient.describeTopics(list).all().get(); And
filter only the partitions which are healthy. Will there be any performance
impact of using AdminClient?

Kafka Version: 2.4.1
Kafka Client: 2.3.0

--Senthil




Re: KafkaConsumer.partitionsFor() Vs KafkaAdminClient.describeTopics()

2020-05-05 Thread Manoj.Agrawal2
I think you can filter the list of partitions returned by 
KafkaConsumer.partitionsFor(): check each entry, and if PartitionInfo.leader() 
is set, include that partition in the list.



On 5/5/20, 11:44 AM, "SenthilKumar K"  wrote:

[External]


Hi Team, we are using the KafkaConsumer.partitionsFor() API to find the list of
available partitions. After fetching the list of partitions, we use the
Consumer.offsetsForTimes() API to find the offsets for a given timestamp.

The Consumer.partitionsFor() API simply returns all partitions, including
partitions whose leader is set to -1. This causes an issue (a
TimeoutException) when we call the Consumer.offsetsForTimes() API.

I'm planning to use adminClient.describeTopics(list).all().get() and filter
only the partitions which are healthy. Will there be any performance impact
from using AdminClient?

Kafka Version: 2.4.1
Kafka Client: 2.3.0

--Senthil




Re: Apache Kafka cluster to cluster

2020-04-29 Thread Manoj.Agrawal2
Is there documentation or an example for MirrorMaker 2.0?
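
MirrorMaker 2.0 shipped with Kafka 2.4 and is described in KIP-382; the binary
distribution also includes a sample configuration file. A sketch of the usual
way to run it, assuming a 2.4+ installation:

# edit the sample properties for your clusters first
bin/connect-mirror-maker.sh config/connect-mirror-maker.properties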

On 4/29/20, 9:04 PM, "Liam Clarke-Hutchinson"  
wrote:

[External]


Hi Blake,

Replicator is, AFAIK, not FOSS. However, MirrorMaker 2.0, which is built
along very similar lines (i.e., on top of Kafka Connect), is FOSS, as is
MirrorMaker 1.0.

On Thu, Apr 30, 2020 at 6:51 AM Blake Miller  wrote:

> Oh, and it looks like Confluent has released a newer replacement for
> MirrorMaker called Replicator
>
>
> https://docs.confluent.io/current/multi-dc-deployments/replicator/migrate-replicator.html
>
>
>
> On Wed, Apr 29, 2020 at 6:49 PM Blake Miller 
> wrote:
>
> > Hi Vishnu,
> >
> > Check out MirrorMaker
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
> >
> > This can do what you want. Note that the offsets are not copied, nor are
> > the message timestamps.
> >
> > HTH
> >
> >
> > On Wed, Apr 29, 2020 at 6:47 PM vishnu murali <vishnumurali9...@gmail.com>
> > wrote:
> >
> >> Hi Guys,
> >>
> >> I have two separate Kafka clusters running on two independent
> >> ZooKeepers.
> >>
> >> I need to send a set of data from a topic on cluster A to cluster B,
> >> with the same topic name and all of the data.
> >>
> >> How can I achieve this? Does anyone have any idea?
> >>
> >
>




Re: One cluster topic to another cluster topic

2020-04-29 Thread Manoj.Agrawal2
Use MirrorMaker.
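
For example, a minimal MirrorMaker 2.0 configuration sketch (the cluster
aliases and bootstrap addresses are placeholders for your own clusters):

# mm2.properties
clusters = A, B
A.bootstrap.servers = cluster-a-host:9092
B.bootstrap.servers = cluster-b-host:9092

# mirror every topic from cluster A to cluster B
A->B.enabled = true
A->B.topics = .*

One caveat: by default MirrorMaker 2.0 prefixes replicated topics with the
source cluster alias (a topic "t" from A arrives on B as "A.t"). If the name
must be identical on both clusters, newer Kafka releases ship an
IdentityReplicationPolicy that can be set via replication.policy.class.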

On 4/29/20, 11:52 AM, "vishnu murali"  wrote:

[External]


Hi Guys,

I have two separate Kafka clusters running on two independent ZooKeepers.

I need to send a set of data from a topic on cluster A to cluster B, with
the same topic name and all of the data.

How can I achieve this? Does anyone have any idea?




Re: Getting NotLeaderForPartitionException for a very long time

2020-04-28 Thread Manoj.Agrawal2
A follower takes some time to become the leader when the leader goes down. You 
can build retry logic around this to handle the situation.
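
With the Java producer, the simplest form of that retry logic is the client's
built-in retry configuration: NotLeaderForPartitionException is a retriable
error, so with retries enabled the producer refreshes metadata and resends once
a new leader is elected. A sketch (the broker address, topic, and retry values
are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RetryingProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092"); // placeholder
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");
        // Retry internally while leader election is in progress instead of
        // surfacing NotLeaderForPartitionException to the caller right away.
        props.put("retries", "10");
        props.put("retry.backoff.ms", "500");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            // Still failed after all retries: alert or re-enqueue here.
                            exception.printStackTrace();
                        }
                    });
        }
    }
}

If partitions stay leaderless even after the broker comes back, something like
kafka-topics.sh --describe --unavailable-partitions can help show which
partitions are affected.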

On 4/28/20, 1:08 AM, "M.Gopala Krishnan"  wrote:

[External]


Hi,

I have a 3-node Kafka cluster (replication factor: 3). Suddenly one of the
nodes in the cluster went down and I started seeing
NotLeaderForPartitionException in my application logs when sending messages
to one of the topics; however, for some of the topics I am able to post and
consume messages.

The problem lasted until all the Kafka servers were restarted; after the
restart everything was OK.

Now, my question is: why is a new leader not elected for those topics,
instead of the same NotLeaderForPartitionException being thrown repeatedly,
and how can I trigger a new leader election for these topics?

*Exception Trace:*

2020-04-11 22:05:21,747 ERROR [pool-15-thread-297] [KafkaMessageProducer:92] Message send failed:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.NotLeaderForPartitionException: This server is not the leader for that topic-partition.
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:94)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:64)
    at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:29)

Regards,
Gopal

