Re: Automation Script : Kafka topic creation

2021-11-06 Thread Ran Lupovich
https://github.com/kafka-ops/julie


Note that I haven't tried it myself, I have only heard about it, and I am not
affiliated with that project.
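
If a full GitOps tool is more than you need, a small wrapper around the Java AdminClient also covers create/delete/alter. A rough sketch of my own (not from the julie project; bootstrap servers, topic name, partition and replica counts are passed in as parameters):

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Collections;
    import java.util.Properties;

    public class TopicTool {
        // usage: TopicTool <bootstrap-servers> <topic> <partitions> <replication-factor>
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, args[0]);

            try (AdminClient admin = AdminClient.create(props)) {
                NewTopic topic = new NewTopic(args[1],
                        Integer.parseInt(args[2]),      // partitions
                        Short.parseShort(args[3]));     // replication factor
                admin.createTopics(Collections.singletonList(topic)).all().get();
                // the same client also exposes deleteTopics(...) and
                // incrementalAlterConfigs(...) for the delete/alter cases
            }
        }
    }

Per-environment differences then reduce to the arguments or properties file you feed it.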

On Sat, Nov 6, 2021, 12:35, Kafka Life wrote:

> Dear Kafka experts
>
> does anyone have a ready/automated script to create/delete/alter topics in
> different environments, taking configuration parameters as input?
>
> If yes, I request you to kindly share it with me, please.
>


Re: Help needed to migrate from one infra to another without downtime

2021-10-20 Thread Ran Lupovich
One thing that comes to mind after reading your explanation: a ZooKeeper quorum
should have an odd number of nodes (3, 5, 7, etc.), but you stated you have six
ZooKeepers. With six nodes the quorum is four, so stopping the three A-side nodes
at once loses the quorum. I would suggest checking this.

On Wed, Oct 20, 2021, 22:00, Rijo Roy wrote:

> Hi,
>
> Hope you are safe and well!
>
> Let me give a brief about my environment:
>
> OS: Ubuntu 18.04
> Kafka Version: Confluent Kafka v5.5.1
> ZooKeeper Version : 3.5.8
> No.of Kafka Brokers: 3
> No. of Zookeeper nodes: 3
>
> I am working on a project where we are aiming to move out from our
> existing infrastructure lets call it A where Kafka and ZooKeeper clusters
> are hosted to a better infrastructure lets call it B but with no or minimal
> downtime. Once the cutover is done, we would like to terminate the old
> infrastructure A.
>
> I was able to use kafka-reassign-partitions.sh as per the steps mentioned
> in https://kafka.apache.org/documentation/#basic_ops_cluster_expansion to
> move the topics-partitions to the Kafka brokers I created in B. Please note
> that I have added 3 zookeeper nodes running in B into the zookeeper cluster
> in A and hence they were following the ZK leader in A.
> I was under the impression that since I had 6 nodes in the ZooKeeper
> ensemble, stopping the A side of ZooKeeper nodes would not cause an issue
> but I was wrong. As soon as I stopped the ZK process on the A nodes, B Zk
> nodes failed to accept any connections from Kafka and I assume it is
> because the leadership of ZK did not transfer to the ZK B nodes and failed
> the quorum resulting in this failure. I had to remove the version-2 folder
> inside the B Zk nodes and starting them 1 by 1 after removing the details
> of ZK A nodes from zookeeper.properties helped me to resolve the failure
> and run the cluster on infrastructure B. I know I failed miserably but this
> was a sandbox where I could afford the downtime but cannot in a production
> setup. I request your help and guidance to make it right. Please help!
>
> Thanks in advance.
>
> Regards, Rijo S Roy
>
>
>


Bye bye ZooKeeper ?

2021-10-03 Thread Ran Lupovich
I am very curious what the Apache Kafka community has in mind as a replacement
for ZooKeeper ACLs; I failed to find any discussion on that...

So for Confluent there are MDS and central ACLs, or RBAC...

What solutions will there be for the open-source users, other than staying
with ZooKeeper?


Re: kafka python client to create kafka user/acl info

2021-07-27 Thread Ran Lupovich
https://github.com/dpkp/kafka-python/blob/f19e4238fb47ae2619f18731f0e0e9a3762cfa11/test/test_admin_integration.py#L40
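
That test file exercises the kafka-python ACL calls. For reference, the same operation through the Java Admin API looks roughly like the sketch below (the topic name and principal are placeholders I made up). Creating the user itself (SCRAM credentials, certificates, etc.) is a separate step from granting ACLs.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntry;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclOperation;
    import org.apache.kafka.common.acl.AclPermissionType;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePattern;
    import org.apache.kafka.common.resource.ResourceType;

    import java.util.Collections;
    import java.util.Properties;

    public class CreateReadAcl {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            try (AdminClient admin = AdminClient.create(props)) {
                // allow the (made-up) principal User:alice to READ topic "my-topic" from any host
                AclBinding binding = new AclBinding(
                        new ResourcePattern(ResourceType.TOPIC, "my-topic", PatternType.LITERAL),
                        new AccessControlEntry("User:alice", "*",
                                AclOperation.READ, AclPermissionType.ALLOW));
                admin.createAcls(Collections.singletonList(binding)).all().get();
            }
        }
    }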

On Tue, Jul 27, 2021, 16:25, Calvin Chen wrote:

> Hi all
>
> I know there is confluent-kafka-python AdminClient to create/delete
> topic/config, can AdminClient also be used to create kafka user and acl
> info? could anybody share some link/example on this, thanks!
>
> -Calvin
>
>


Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
Another acceptable solution is to make the processing idempotent: when you
re-read a message you check "did I process this already?", or you do an
upsert... and keep the pipeline at at-least-once semantics.
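
For the "commit the offset and produce atomically" question further down the thread: with a plain consumer plus producer, that is what the transactional producer's sendOffsetsToTransaction is for. A rough sketch (topic names, group id and the process() step are placeholders; downstream consumers should read with isolation.level=read_committed):

    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.clients.producer.*;
    import org.apache.kafka.common.KafkaException;
    import org.apache.kafka.common.TopicPartition;

    import java.time.Duration;
    import java.util.*;

    public class ExactlyOnceRelay {
        static String process(String value) { return value; }   // placeholder for real processing

        public static void main(String[] args) {
            Properties cp = new Properties();
            cp.put("bootstrap.servers", "localhost:9092");
            cp.put("group.id", "my-relay");
            cp.put("enable.auto.commit", "false");               // offsets go through the producer instead
            cp.put("isolation.level", "read_committed");
            cp.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            cp.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cp);

            Properties pp = new Properties();
            pp.put("bootstrap.servers", "localhost:9092");
            pp.put("transactional.id", "my-relay-1");            // unique per producer instance
            pp.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            pp.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            KafkaProducer<String, String> producer = new KafkaProducer<>(pp);

            producer.initTransactions();
            consumer.subscribe(Collections.singletonList("source-topic"));

            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) continue;
                producer.beginTransaction();
                try {
                    Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
                    for (ConsumerRecord<String, String> rec : records) {
                        producer.send(new ProducerRecord<>("destination-topic", rec.key(), process(rec.value())));
                        offsets.put(new TopicPartition(rec.topic(), rec.partition()),
                                new OffsetAndMetadata(rec.offset() + 1));
                    }
                    // the offset commit rides in the same transaction as the produced records:
                    // either both become visible or neither does (clients >= 2.5 can pass groupMetadata();
                    // older clients use the overload that takes the group id string)
                    producer.sendOffsetsToTransaction(offsets, consumer.groupMetadata());
                    producer.commitTransaction();
                } catch (KafkaException e) {
                    producer.abortTransaction();   // nothing from this batch becomes visible
                    // a real application should rewind to the last committed offsets (or restart)
                    // so the aborted batch is reprocessed; exiting keeps this sketch simple
                    break;
                }
            }
            consumer.close();
            producer.close();
        }
    }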

On Fri, Jul 16, 2021, 19:10, Ran Lupovich <ranlupov...@gmail.com> wrote:

> You need to make the processing and the saving of the partition/offsets
> atomic; on rebalance, on assignment, or on initial start you read the
> offsets back from the outside store. There is documentation and there are
> examples on the internet. What type of processing are you doing?
>
> On Fri, Jul 16, 2021, 19:01, Pushkar Deole <pdeole2...@gmail.com> wrote:
>
>> Chris,
>>
>> I am not sure how this solves the problem scenario that we are
>> experiencing
>> in customer environment: the scenario is:
>> 1. application consumed record and processed it
>> 2. the processed record is produced on destination topic and ack is
>> received
>> 3. Before committing offset back to consumed topic, the application pod
>> crashed or shut down by kubernetes or shut down due to some other issue
>>
>> On Fri, Jul 16, 2021 at 8:57 PM Chris Larsen > >
>> wrote:
>>
>> > It is not possible out of the box, it is something you’ll have to write
>> > yourself. Would the following work?
>> >
>> > Consume -> Produce to primary topic-> get success ack back -> commit the
>> > consume
>> >
>> > Else if ack fails, produce to dead letter, then commit upon success
>> >
>> > Else if dead letter ack fails, exit (and thus don’t commit)
>> >
>> > Does that help? Someone please feel free to slap my hand but seems
>> legit to
>> > me ;)
>> >
>> > Chris
>> >
>> >
>> >
>> > On Fri, Jul 16, 2021 at 10:48 Pushkar Deole 
>> wrote:
>> >
>> > > Thanks Chris for the response!
>> > >
>> > > The current application is quite evolved and currently using
>> > >
>> > > consumer-producer model described above and we need to fix some bugs
>> soon
>> > >
>> > > for a customer. So, moving to kafka streams seems bigger work. That's
>> why
>> > >
>> > > looking at work around if same thing can be achieved with current
>> model
>> > >
>> > > using transactions that span across consumer offset commits and
>> producer
>> > >
>> > > send.
>> > >
>> > >
>> > >
>> > > We have made the producer idempotent and turned on transactions.
>> > >
>> > > However want to make offset commit to consumer and send from producer
>> to
>> > be
>> > >
>> > > atomic? Is that possible?
>> > >
>> > >
>> > >
>> > > On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen
>> > > > > >
>> > >
>> > > wrote:
>> > >
>> > >
>> > >
>> > > > Pushkar, in kafka development for customer consumer/producer you
>> handle
>> > > it.
>> > >
>> > > > However you can ensure the process stops (or sends message to dead
>> > > letter)
>> > >
>> > > > before manually committing the consumer offset. On the produce side
>> you
>> > > can
>> > >
>> > > > turn on idempotence or transactions. But unless you are using
>> Streams,
>> > > you
>> > >
>> > > > chain those together yourself. Would kafka streams work for the
>> > operation
>> > >
>> > > > you’re looking to do?
>> > >
>> > > >
>> > >
>> > > > Best,
>> > >
>> > > > Chris
>> > >
>> > > >
>> > >
>> > > > On Fri, Jul 16, 2021 at 08:30 Pushkar Deole 
>> > > wrote:
>> > >
>> > > >
>> > >
>> > > > > Hi All,
>> > >
>> > > > >
>> > >
>> > > > >
>> > >
>> > > > >
>> > >
>> > > > > I am using a normal kafka consumer-producer in my microservice,
>> with
>> > a
>> > >
>> > > > >
>> > >
>> > > > > simple model of consume from source topic -> process the record ->
>> > >
>> > > > produce
>> > >
>> > >

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
You need to make the processing and the saving of the partition/offsets
atomic; on rebalance, on assignment, or on initial start you read the
offsets back from the outside store. There is documentation and there are
examples on the internet. What type of processing are you doing?
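
A rough sketch of that pattern is below. The OffsetStore interface is a stand-in for whatever external database you use; the key points are that the processing result and the new offset are written in one store transaction, and that on (re)assignment the consumer seeks back to the stored position.

    import org.apache.kafka.clients.consumer.*;
    import org.apache.kafka.common.TopicPartition;

    import java.time.Duration;
    import java.util.Collection;
    import java.util.Collections;

    public class ExternalOffsetConsumer {

        // stand-in for your external store (RDBMS, key-value store, ...): the processing
        // result and the new offset must be written in the same store transaction
        interface OffsetStore {
            long lastCommittedOffset(TopicPartition tp);
            void saveResultAndOffset(String result, TopicPartition tp, long nextOffset);
        }

        static String process(ConsumerRecord<String, String> rec) { return rec.value(); } // placeholder

        static void run(KafkaConsumer<String, String> consumer, OffsetStore store) {
            // consumer must be configured with enable.auto.commit=false
            consumer.subscribe(Collections.singletonList("source-topic"), new ConsumerRebalanceListener() {
                @Override public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                    // progress already lives in the external store; nothing to commit to Kafka
                }
                @Override public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                    for (TopicPartition tp : parts) {
                        consumer.seek(tp, store.lastCommittedOffset(tp));  // resume from the outside store
                    }
                }
            });

            while (true) {
                for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
                    TopicPartition tp = new TopicPartition(rec.topic(), rec.partition());
                    // atomic: result and offset are committed together, so a crash can never
                    // leave "processed but offset not saved" (or the other way around)
                    store.saveResultAndOffset(process(rec), tp, rec.offset() + 1);
                }
            }
        }
    }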

On Fri, Jul 16, 2021, 19:01, Pushkar Deole <pdeole2...@gmail.com> wrote:

> Chris,
>
> I am not sure how this solves the problem scenario that we are experiencing
> in customer environment: the scenario is:
> 1. application consumed record and processed it
> 2. the processed record is produced on destination topic and ack is
> received
> 3. Before committing offset back to consumed topic, the application pod
> crashed or shut down by kubernetes or shut down due to some other issue
>
> On Fri, Jul 16, 2021 at 8:57 PM Chris Larsen  >
> wrote:
>
> > It is not possible out of the box, it is something you’ll have to write
> > yourself. Would the following work?
> >
> > Consume -> Produce to primary topic-> get success ack back -> commit the
> > consume
> >
> > Else if ack fails, produce to dead letter, then commit upon success
> >
> > Else if dead letter ack fails, exit (and thus don’t commit)
> >
> > Does that help? Someone please feel free to slap my hand but seems legit
> to
> > me ;)
> >
> > Chris
> >
> >
> >
> > On Fri, Jul 16, 2021 at 10:48 Pushkar Deole 
> wrote:
> >
> > > Thanks Chris for the response!
> > >
> > > The current application is quite evolved and currently using
> > >
> > > consumer-producer model described above and we need to fix some bugs
> soon
> > >
> > > for a customer. So, moving to kafka streams seems bigger work. That's
> why
> > >
> > > looking at work around if same thing can be achieved with current model
> > >
> > > using transactions that span across consumer offset commits and
> producer
> > >
> > > send.
> > >
> > >
> > >
> > > We have made the producer idempotent and turned on transactions.
> > >
> > > However want to make offset commit to consumer and send from producer
> to
> > be
> > >
> > > atomic? Is that possible?
> > >
> > >
> > >
> > > On Fri, Jul 16, 2021 at 6:18 PM Chris Larsen
> >  > > >
> > >
> > > wrote:
> > >
> > >
> > >
> > > > Pushkar, in kafka development for customer consumer/producer you
> handle
> > > it.
> > >
> > > > However you can ensure the process stops (or sends message to dead
> > > letter)
> > >
> > > > before manually committing the consumer offset. On the produce side
> you
> > > can
> > >
> > > > turn on idempotence or transactions. But unless you are using
> Streams,
> > > you
> > >
> > > > chain those together yourself. Would kafka streams work for the
> > operation
> > >
> > > > you’re looking to do?
> > >
> > > >
> > >
> > > > Best,
> > >
> > > > Chris
> > >
> > > >
> > >
> > > > On Fri, Jul 16, 2021 at 08:30 Pushkar Deole 
> > > wrote:
> > >
> > > >
> > >
> > > > > Hi All,
> > >
> > > > >
> > >
> > > > >
> > >
> > > > >
> > >
> > > > > I am using a normal kafka consumer-producer in my microservice,
> with
> > a
> > >
> > > > >
> > >
> > > > > simple model of consume from source topic -> process the record ->
> > >
> > > > produce
> > >
> > > > >
> > >
> > > > > on destination topic.
> > >
> > > > >
> > >
> > > > > I am mainly looking for exactly-once guarantee  wherein the offset
> > > commit
> > >
> > > > >
> > >
> > > > > to consumed topic and produce on destination topic would both
> happen
> > >
> > > > >
> > >
> > > > > atomically or none of them would happen.
> > >
> > > > >
> > >
> > > > >
> > >
> > > > >
> > >
> > > > > In case of failures of service instance, if consumer has consumed,
> > >
> > > > >
> > >
> > > > > processed record and produced on destination topic but offset not
> yet
> > >
> > > > >
> > >
> > > > > committed back to source topic then produce should also not happen
> on
> > >
> > > > >
> > >
> > > > > destination topic.
> > >
> > > > >
> > >
> > > > > Is this behavior i.e. exactly-once, across consumers and producers,
> > >
> > > > >
> > >
> > > > > possible with transactional support in kafka?
> > >
> > > > >
> > >

Re: Kafka Consumer Retries Failing

2021-07-13 Thread Ran Lupovich
I would suggest you check your bootstrap definition and server.properties;
somehow it is looking for http://ip:9092, and Kafka does not use the HTTP
protocol, so it seems something is not configured correctly.

On Tue, Jul 13, 2021, 21:46, Rahul Patwari <rahulpatwari8...@gmail.com> wrote:

> Hi,
>
> We are facing an issue in our application where Kafka Consumer Retries are
> failing whereas a restart of the application is making the Kafka Consumers
> work as expected again.
>
> Kafka Server version is 2.5.0 - confluent 5.5.0
> Kafka Client Version is 2.4.1 -
>
> {"component":"org.apache.kafka.common.utils.AppInfoParser$AppInfo","message":"Kafka
> version: 2.4.1","method":""}
>
> Occasionally(every 24 hours), we have observed that the Kafka consumption
> rate went down(NOT 0) and the following logs were observed: Generally, the
> consumption rate across all consumers is 1k records/sec. When this issue
> occurred, the consumption rate dropped to < 100 records/sec
>
> org.apache.kafka.common.errors.DisconnectException: null
>
>
> {"time":"2021-07-07T22:13:37,385","severity":"INFO","component":"org.apache.kafka.clients.FetchSessionHandler","message":"[Consumer
> clientId=consumer-XX-3, groupId=XX] Error sending fetch request
> (sessionId=405798138, epoch=5808) to node 8: {}.","method":"handleError"}
>
> org.apache.kafka.common.errors.TimeoutException: Failed
>
>
> {"time":"2021-07-07T22:26:41,379","severity":"INFO","component":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator","message":"[Consumer
> clientId=consumer-XX-3, groupId=XX] Group coordinator x.x.x.x:9092
>  (id: 2147483623 rack: null) is unavailable or
> invalid, will attempt rediscovery","method":"markCoordinatorUnknown"}
>
>
> {"time":"2021-07-07T22:27:10,465","severity":"INFO","component":"org.apache.kafka.clients.consumer.internals.AbstractCoordinator$FindCoordinatorResponseHandler","message":"[Consumer
> clientId=consumer-XX-3, groupId=XX] Discovered group coordinator x
> .x.x.x:9092  (id: 2147483623 rack:
> null)","method":"onSuccess"}
>
> The consumers retried for more than an hour but the above logs are observed
> again.
> The consumers started pulling data after a manual restart of the
> application.
>
> No WARN or ERROR logs were observed in Kafka or Zookeeper during this
> period.
>
> Our observation from this incident is that Kafka Consumer retries could not
> resolve the issue but a manual restart of the application does.
>
> Has anyone faced this issue before? Any pointers are appreciated.
>
> Regards,
> Rahul
>


Re: Kafka ACL: topic with wildcard (*)

2021-07-13 Thread Ran Lupovich
As you can see, using only an unquoted * it also creates some trash
authorizations, which you might want to delete.
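
For example, the accidental entry created by the original command can be dropped again with the matching --remove call (same broker, principal and config file as in the question; I have not run this against that cluster):

    kafka-acls.sh --bootstrap-server kafka-dev:9092 --command-config client.properties \
      --remove --allow-principal User:kafka-consumer --operation read --topic client.properties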

On Tue, Jul 13, 2021, 16:16, Ran Lupovich <ranlupov...@gmail.com> wrote:

> Hello  in order to grant READ access to all topics like you did instead
> of just * , which in this case will take the files in current directory and
> as the name of the topic, use "*" with the " before and after.. good luck
>
> On Tue, Jul 13, 2021, 15:40, Dhirendra Singh <dhirendr...@gmail.com> wrote:
>
>> Hi All,
>> I am trying to add an acl for a user to have read permission on all
>> topics.
>> I executed the following command.
>>
>> kafka-acls.sh --bootstrap-server kafka-dev:9092 --command-config
>> client.properties --add --allow-principal User:kafka-consumer --operation
>> read --topic *
>>
>> The output i got from the above command is...
>>
>> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC,
>> name=client.properties, patternType=LITERAL)`:
>> (principal=User:kafka-consumer, host=*, operation=READ,
>> permissionType=ALLOW)
>>
>> From the output it seems instead of allowing permission on all topic it is
>> allowing on topic "client.properties" which is the name of the config file
>> passed to the command. It is not a topic which exist in kafka.
>>
>> How to fix this issue ?
>> My kafka version is 2.5.0
>>
>


Re: Kafka ACL: topic with wildcard (*)

2021-07-13 Thread Ran Lupovich
Hello  in order to grant READ access to all topics like you did instead
of just * , which in this case will take the files in current directory and
as the name of the topic, use "*" with the " before and after.. good luck
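
So, keeping everything else from the original command, only the topic argument changes (quoted so the shell does not expand it against the current directory):

    kafka-acls.sh --bootstrap-server kafka-dev:9092 --command-config client.properties \
      --add --allow-principal User:kafka-consumer --operation read --topic "*"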

On Tue, Jul 13, 2021, 15:40, Dhirendra Singh <dhirendr...@gmail.com> wrote:

> Hi All,
> I am trying to add an acl for a user to have read permission on all topics.
> I executed the following command.
>
> kafka-acls.sh --bootstrap-server kafka-dev:9092 --command-config
> client.properties --add --allow-principal User:kafka-consumer --operation
> read --topic *
>
> The output i got from the above command is...
>
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC,
> name=client.properties, patternType=LITERAL)`:
> (principal=User:kafka-consumer, host=*, operation=READ,
> permissionType=ALLOW)
>
> From the output it seems instead of allowing permission on all topic it is
> allowing on topic "client.properties" which is the name of the config file
> passed to the command. It is not a topic which exist in kafka.
>
> How to fix this issue ?
> My kafka version is 2.5.0
>


Re: Kafka ACL: topic with wildcard (*)

2021-07-13 Thread Ran Lupovich
Use "*"

On Tue, Jul 13, 2021, 15:40, Dhirendra Singh <dhirendr...@gmail.com> wrote:

> Hi All,
> I am trying to add an acl for a user to have read permission on all topics.
> I executed the following command.
>
> kafka-acls.sh --bootstrap-server kafka-dev:9092 --command-config
> client.properties --add --allow-principal User:kafka-consumer --operation
> read --topic *
>
> The output i got from the above command is...
>
> Adding ACLs for resource `ResourcePattern(resourceType=TOPIC,
> name=client.properties, patternType=LITERAL)`:
> (principal=User:kafka-consumer, host=*, operation=READ,
> permissionType=ALLOW)
>
> From the output it seems instead of allowing permission on all topic it is
> allowing on topic "client.properties" which is the name of the config file
> passed to the command. It is not a topic which exist in kafka.
>
> How to fix this issue ?
> My kafka version is 2.5.0
>


Re: kafka partition_assignment_strategy

2021-07-01 Thread Ran Lupovich
That sorts things out.

On Thu, Jul 1, 2021, 18:24, Marcus Schäfer <marcus.schae...@gmail.com> wrote:

> Hi,
>
> > Hi Marcus,
> > What Ran meant is that please run the following command to check the
> > result.
> >
> > bin/kafka-topics.sh --describe --topic TOPIC_NAME --bootstrap-server
> > localhost:9092
>
> Ah, from the kafka tooling. Sorry I was confused because the aws
> api name their commands also --describe-something ;)
>
> here we go:
>
> Topic: cb-request    PartitionCount: 1    ReplicationFactor: 2
> Configs: min.insync.replicas=2,message.format.version=2.6-IV0,unclean.leader.election.enable=true
> Topic: cb-request    Partition: 0    Leader: 2    Replicas: 2,1    Isr: 2,1
>
> ok, I think that explains everything. The topic has not assigned more
> than one partition
>
> Actually it turns out I'm so stupid. When I created the topic via
> "bin/kafka-topics.sh --create" I'm sure I'll gave the --partitions
> option a number and that was probably "1"
>
> Thanks for the hint and sorry if I wasted your time
>
> Regards,
> Marcus
> --
>  Public Key available via: https://keybase.io/marcus_schaefer/key.asc
>  keybase search marcus_schaefer
>  ---
>  Marcus Schäfer Am Unterösch 9
>  Tel: +49 7562 905437   D-88316 Isny / Rohrdorf
>  Germany
>  ---
>


Re: kafka partition_assignment_strategy

2021-07-01 Thread Ran Lupovich
I meant the kafka-topics describe command; I want to make sure your topic
really has 20 partitions.

On Thu, Jul 1, 2021, 14:12, Marcus Schäfer <marcus.schae...@gmail.com> wrote:

> Hi,
>
> > How many partitions do you have for the topic? Can you issue --describe
> > please and share, thanks
>
> I've assigned 20 partitions:
>
> aws kafka describe-configuration-revision --region eu-central-1 --arn
> "..." --revision 1
> base64 --decode ...
>
> auto.create.topics.enable=false
> default.replication.factor=3
> min.insync.replicas=2
> num.io.threads=8
> num.network.threads=5
> num.partitions=20
> num.replica.fetchers=2
> replica.lag.time.max.ms=3
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> socket.send.buffer.bytes=102400
> unclean.leader.election.enable=true
> zookeeper.session.timeout.ms=18000
>
>
> aws kafka describe-cluster --region eu-central-1 --cluster-arn "..."
>
> {
> "ClusterInfo": {
> "BrokerNodeGroupInfo": {
> "BrokerAZDistribution": "DEFAULT",
> "ClientSubnets": [
> "subnet-XX",
> "subnet-XX"
> ],
> "InstanceType": "kafka.m5.large",
> "SecurityGroups": [
> "sg-XX"
> ],
> "StorageInfo": {
> "EbsStorageInfo": {
> "VolumeSize": 100
> }
> }
> },
> "ClusterArn": "...",
> "ClusterName": "cb-cluster",
> "CreationTime": "2021-06-24T07:00:15.158Z",
> "CurrentBrokerSoftwareInfo": {
> "ConfigurationArn": "...",
> "ConfigurationRevision": 1,
> "KafkaVersion": "2.6.1"
> },
> "CurrentVersion": "K3T4TT2Z381HKD",
> "EncryptionInfo": {
> "EncryptionAtRest": {
> "DataVolumeKMSKeyId": "..."
> },
> "EncryptionInTransit": {
> "ClientBroker": "TLS_PLAINTEXT",
> "InCluster": true
> }
> },
> "EnhancedMonitoring": "DEFAULT",
> "OpenMonitoring": {
> "Prometheus": {
> "JmxExporter": {
> "EnabledInBroker": false
> },
> "NodeExporter": {
> "EnabledInBroker": false
> }
> }
> },
> "NumberOfBrokerNodes": 2,
> "State": "ACTIVE",
> "Tags": {},
> "ZookeeperConnectString": ""
> }
> }
>
> Thanks
>
> Regards,
> Marcus
> --
>  Public Key available via: https://keybase.io/marcus_schaefer/key.asc
>  keybase search marcus_schaefer
>  ---
>  Marcus Schäfer Am Unterösch 9
>  Tel: +49 7562 905437   D-88316 Isny / Rohrdorf
>  Germany
>  ---
>


Re: kafka partition_assignment_strategy

2021-07-01 Thread Ran Lupovich
How many partitions do you have for the topic? Can you run --describe
and share the output, please? Thanks.

On Thu, Jul 1, 2021, 13:31, Marcus Schäfer <marcus.schae...@gmail.com> wrote:

> Hi,
>
> > Your first understanding is correct, provided each “consumer” means a
> > “consumer thread”
>
> all right, thanks
>
> > IMO, Second understanding about message distribution is incorrect because
> > there is something called as max poll records for each consumer. Its 500
> by
> > default. And the time between 2 polls is also very small in few
> milliseconds.
> > Thats why this is happening.
> >
> > You may need to try this on a big number of messages so that other
> > partitions get assigned.
>
> Ah ok, so you are saying the partition asignment depends on the load
> of the topic ? This was new to me as I thought kafka distributes the
> messages between the active consumers independent of the amount of
> data. Is there documentation available that explains how this
> correlates ?
>
> If one poll() fetches them all it's clear to me that there is not
> much left to distribute and if the subsequent poll() happens right
> after the former one, I can imagine it stays at the same partition.
> Thanks for pointing this out.
>
> So I changed the code and used:
>
> max_poll_records=1
>
> That actually means one poll() for each message. I added a
> sleep time between the polls of 1sec and started two consumers(read.py)
> with some delay such that there are a number of poll()'s from
> different consumers with different timing.
>
> From this situation I tested producing messages:
>
>20
>200
>2000
>20
>
> There was no case in which messages got distributed between the
> two consumers. It was always one receiving them. If you close
> one of the consumers the other continued receiving the messages.
>
> I must admit I did not wait forever (1sec between polls is long :-))
> but I also think it wouldn't have changed while processing.
>
> > I tried my best to participate in discussion I am not expert though
>
> Thanks much for looking into this. I still think something is
> wrong in either my testing/code or on the cluster. Maybe some
> details on the kafka setup helps:
>
> This is the server setup:
>
> auto.create.topics.enable=false
> default.replication.factor=3
> min.insync.replicas=2
> num.io.threads=8
> num.network.threads=5
> num.partitions=20
> num.replica.fetchers=2
> replica.lag.time.max.ms=3
> socket.receive.buffer.bytes=102400
> socket.request.max.bytes=104857600
> socket.send.buffer.bytes=102400
> unclean.leader.election.enable=true
> zookeeper.session.timeout.ms=18000
>
> This is the kafka version
>
> "CurrentBrokerSoftwareInfo": {
> "KafkaVersion": "2.6.1"
> }
>
>
> Do you see any setting that impacts the dynamic partition assignment ?
>
> I was reading about the api_version exchange and that it can
> impact the availability of features:
>
> (0, 9) enables full group coordination features with automatic
>partition assignment and rebalancing,
> (0, 8, 2) enables kafka-storage offset commits with manual
>partition assignment only,
> (0, 8, 1) enables zookeeper-storage offset commits with manual
>partition assignment only,
> (0, 8, 0) enables basic functionality but requires manual
>partition assignment and offset management.
>
>
> I was not able to find out which api_version the server as setup
> by Amazon MSK is talking, though
>
> Thanks
>
> Regards,
> Marcus
> --
>  Public Key available via: https://keybase.io/marcus_schaefer/key.asc
>  keybase search marcus_schaefer
>  ---
>  Marcus Schäfer Am Unterösch 9
>  Tel: +49 7562 905437   D-88316 Isny / Rohrdorf
>  Germany
>  ---
>


Re: Hello, I wonder release date of 2.7.1 kafka.

2021-06-29 Thread Ran Lupovich
(and yes, it was released on May 10, 2021)

On Tue, Jun 29, 2021, 17:49, Ran Lupovich <ranlupov...@gmail.com> wrote:

> Kafka 2.7.1 fixes 45 issues since the 2.7.0 release. For more information,
> please read the detailed Release Notes
> <https://downloads.apache.org/kafka/2.7.1/RELEASE_NOTES.html>.
>
> https://github.com/apache/kafka/releases/tag/2.7.1
>
> Kafka development usually ships minor/patch releases for previous release lines.
>
> Why do you think there is an error in the documents?
>
> On Tue, Jun 29, 2021, 17:42, 전예지 wrote:
>
>> Hi, I'm Yeji, a Kafka user.
>> I have a question about the release date of Kafka 2.7.1: was this version
>> released on May 10, 2021?
>> I think there is a wrong date on the web site.
>> Thank you for your help.
>>
>


Re: Hello, I wonder release date of 2.7.1 kafka.

2021-06-29 Thread Ran Lupovich
Kafka 2.7.1 fixes 45 issues since the 2.7.0 release. For more information,
please read the detailed Release Notes:
https://downloads.apache.org/kafka/2.7.1/RELEASE_NOTES.html

https://github.com/apache/kafka/releases/tag/2.7.1

Kafka development usually ships minor/patch releases for previous release lines.

Why do you think there is an error in the documents?

On Tue, Jun 29, 2021, 17:42, 전예지 wrote:

> Hi, I'm Yeji, a Kafka user.
> I have a question about the release date of Kafka 2.7.1: was this version
> released on May 10, 2021?
> I think there is a wrong date on the web site.
> Thank you for your help.
>


Re: Certificate request not coming mtls

2021-06-25 Thread Ran Lupovich
You open separate threads for no reason... I feel you are spamming the
mailing list. You need to share more information for anyone to be able to
investigate and help you. Does your server.properties enable SSL on the port
you are trying to connect to? You need to set it up step by step according to
the manuals and it will work for you. It is not just a client-side change that
needs to be done; the server needs a change to enable SSL on that port.
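
For reference, the broker side needs something along these lines in server.properties before any client-side mTLS settings can work (port number, host names, paths and passwords here are placeholders, not values from your setup):

    listeners=PLAINTEXT://:9092,SSL://:9093
    advertised.listeners=PLAINTEXT://broker1.example.com:9092,SSL://broker1.example.com:9093
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=<keystore-password>
    ssl.key.password=<key-password>
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=<truststore-password>
    ssl.client.auth=required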

On Fri, Jun 25, 2021, 11:29, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:

> 1. Was trying mTLS by setting ssl.client.auth=required
> 2. Had imported the truststore, keystore and everything on the client side
> 3. Need to consume messages on the client, which we are not able to see
>
> Can you help with this?
>
>
> On Fri, Jun 25, 2021, 13:54 M. Manna  wrote:
>
> > 1. What is it that you’ve tried ?
> > 2. What config changes have you made?
> > 3. What do you expect to see?
> >
> > On Fri, 25 Jun 2021 at 09:22, Anjali Sharma <
> sharma.anjali.2...@gmail.com>
> > wrote:
> >
> > > Hii All,
> > >
> > >
> > > Can you please help with this?
> > >
> > > While trying for mTLS with ssl.client.auth=required, server side in
> certificate
> > > request the DN are for some junk certificates which we have not
> deployed
> > on
> > > server
> > >
> >
>


Re: Mtls not working

2021-06-24 Thread Ran Lupovich
Can you share your listeners properties from server.properties?


On Thu, Jun 24, 2021, 19:49, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:

> But in the pcap I am able to see that it is taking some junk certificates
> from client side
>
> On Thu, Jun 24, 2021, 21:58 Ran Lupovich  wrote:
>
> > Make sure that the date and time on the server is correct (The wrong time
> > will cause the SSL certificate connection to fail).
> >
> > On Thu, Jun 24, 2021, 19:18, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
> >
> > > openssl s_client -connect 10.54.65.99:28105
> > > socket: Bad file descriptor
> > > connect:errno=9
> > >
> > > This is the output we are getting
> > >
> > >
> > > On Thu, Jun 24, 2021 at 6:04 PM Shilin Wu 
> > > wrote:
> > >
> > > > I think your port may not even be enabled with SSL.
> > > >
> > > > do this
> > > > "openssl s_client -connect :"
> > > > and show the result ?
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On Thu, Jun 24, 2021 at 8:32 PM Anjali Sharma <
> > > > sharma.anjali.2...@gmail.com>
> > > > wrote:
> > > >
> > > > > This is the error we are getting
> > > > >
> > > > >
> > > > >   [2021-06-22 10:59:45,049] ERROR [Consumer clientId=consumer-1,
> > > > > groupId=test-consumer-group] Connection to node -1 failed
> > > authentication
> > > > > due to: SSL handshake failed
> (org.apache.kafka.clients.NetworkClient)
> > > > > [2021-06-22 10:59:45,051] ERROR Authentication failed: terminating
> > > > consumer
> > > > > process (kafka.tools.ConsoleConsumer$)
> > > > > org.apache.kafka.common.errors.SslAuthenticationException: SSL
> > > handshake
> > > > > failed
> > > > > Caused by: javax.net.ssl.SSLException: Unsupported record version
> > > > > Unknown-211.79
> > > > >
> > > > >
> > > > > On Thu, Jun 24, 2021, 17:59 Shilin Wu 
> > > wrote:
> > > > >
> > > > > > You need to make sure the following one by one... Or you can post
> > the
> > > > > > message of error here so we can see exact error.
> > > > > >
> > > > > >
> > > > > > > > > > > 1. Client trust store need to trust the server cert's
> > > issuer
> > > > > cert
> > > > > > > > (AKA
> > > > > > > > > > the
> > > > > > > > > > > CA cert)
> > > > > > > > > > > 2. The client must have a keystore that can be trusted
> by
> > > > > > server's
> > > > > > > > > trust
> > > > > > > > > > > store.
> > > > > > > > > > > 3. The server needs to be accessed either via FQDN, or
> > one
> > > of
> > > > > the
> > > > > > > SAN
> > > > > > > > > > > address. If you are doing self sign, you can add many
> DNS
> > > > alias
> > > > > > and
> > > > > > > > > even
> > > > > > > > > > ip
> > > > > > > > > > > addresses to the server's cert.
> > > > > > > > > > > 4. Make sure the server cert has extended key usage of
> > > > > > serverAuth,
> > > > > > > > > client
> > > > > > > > > > > cert has extended key usa

Re: Mtls not working

2021-06-24 Thread Ran Lupovich
Make sure that the date and time on the server is correct (The wrong time
will cause the SSL certificate connection to fail).

On Thu, Jun 24, 2021, 19:18, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:

> openssl s_client -connect 10.54.65.99:28105
> socket: Bad file descriptor
> connect:errno=9
>
> This is the output we are getting
>
>
> On Thu, Jun 24, 2021 at 6:04 PM Shilin Wu 
> wrote:
>
> > I think your port may not even be enabled with SSL.
> >
> > do this
> > "openssl s_client -connect :"
> > and show the result ?
> >
> >
> >
> >
> >
> > On Thu, Jun 24, 2021 at 8:32 PM Anjali Sharma <
> > sharma.anjali.2...@gmail.com>
> > wrote:
> >
> > > This is the error we are getting
> > >
> > >
> > >   [2021-06-22 10:59:45,049] ERROR [Consumer clientId=consumer-1,
> > > groupId=test-consumer-group] Connection to node -1 failed
> authentication
> > > due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
> > > [2021-06-22 10:59:45,051] ERROR Authentication failed: terminating
> > consumer
> > > process (kafka.tools.ConsoleConsumer$)
> > > org.apache.kafka.common.errors.SslAuthenticationException: SSL
> handshake
> > > failed
> > > Caused by: javax.net.ssl.SSLException: Unsupported record version
> > > Unknown-211.79
> > >
> > >
> > > On Thu, Jun 24, 2021, 17:59 Shilin Wu 
> wrote:
> > >
> > > > You need to make sure the following one by one... Or you can post the
> > > > message of error here so we can see exact error.
> > > >
> > > >
> > > > > > > > > 1. Client trust store need to trust the server cert's
> issuer
> > > cert
> > > > > > (AKA
> > > > > > > > the
> > > > > > > > > CA cert)
> > > > > > > > > 2. The client must have a keystore that can be trusted by
> > > > server's
> > > > > > > trust
> > > > > > > > > store.
> > > > > > > > > 3. The server needs to be accessed either via FQDN, or one
> of
> > > the
> > > > > SAN
> > > > > > > > > address. If you are doing self sign, you can add many DNS
> > alias
> > > > and
> > > > > > > even
> > > > > > > > ip
> > > > > > > > > addresses to the server's cert.
> > > > > > > > > 4. Make sure the server cert has extended key usage of
> > > > serverAuth,
> > > > > > > client
> > > > > > > > > cert has extended key usage of clientAuth. Actually you can
> > > have
> > > > > > both -
> > > > > > > > if
> > > > > > > > > you are generating yourself.
> > > >
> > > >
> > > >
> > > > On Thu, Jun 24, 2021 at 8:26 PM Anjali Sharma <
> > > > sharma.anjali.2...@gmail.com>
> > > > wrote:
> > > >
> > > > > Thanks for this but we are trying to do this on command line but
> > > getting
> > > > > this bad certificate error
> > > > >
> > > > > On Thu, Jun 24, 2021, 17:52 Shilin Wu 
> > > wrote:
> > > > >
> > > > > > you may do openssl s_client -connect kafkahost:port to dump the
> > cert.
> > > > > >
> > > > > > See if the cert makes sense.
> > > > > >
> > > > > > To test if your SSL works, you may try use this java program to
> > test
> > > if
> > > > > you
> > > > > > have SSL trust issue - if it connects ok, the cert trust is
> mostly
> > to
> > > > be
> > > > > > okay. (remember to change your host name in code, and jks path in
> > > > command
> > > > > > line options.
> > > > > >
> > > > > >
> > > > > > java -Djavax.net.ssl.trustStore=truststore.jks
> > > > > > -Djavax.net.ssl.trustStorePassword=changeme Test
> > > > > >
> > > > > > import java.net.*;
> > > > > >
> > > > > > import java.io.*;
> > > > > >
> > > > > > import javax.net.ssl.*;
> > > > > >
> > > > > >
> > > > > > /*
> > > > > >
> > > > > >  * This example demostrates how to use a SSLSocket as client to
> > > > > >
> > > > > >  * send a HTTP request and get response from an HTTPS server.
> > > > > >
> > > > > >  * It assumes that the client is not behind a firewall
> > > > > >
> > > > > >  */
> > > > > >
> > > > > >
> > > > > > 

Re: Consumer Group Stuck on "Completing-Rebalance" State

2021-06-21 Thread Ran Lupovich
https://issues.apache.org/jira/plugins/servlet/mobile#issue/KAFKA-12890

Check out this jira ticket

On Mon, Jun 21, 2021, 22:15, Tao Huang <sandy.huang...@gmail.com> wrote:

> Hi There,
>
> I am experiencing an intermittent issue where the consumer group gets stuck
> in the "Completing-Rebalance" state. When this is happening, the client throws
> the error below:
>
> 2021-06-18 13:55:41,086 ERROR io.mylab.adapter.KafkaListener
> [edfKafkaListener:CIO.PandC.CIPG.InternalLoggingMetadataInfo] Exception on
> Kafka listener (InternalLoggingMetadataInfo) - Some partitions are
> unassigned but all consumers are at maximum capacity
> java.lang.IllegalStateException: Some partitions are unassigned but all
> consumers are at maximum capacity
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractStickyAssignor.constrainedAssign(AbstractStickyAssignor.java:248)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractStickyAssignor.assign(AbstractStickyAssignor.java:81)
> at
>
> org.apache.kafka.clients.consumer.CooperativeStickyAssignor.assign(CooperativeStickyAssignor.java:64)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractPartitionAssignor.assign(AbstractPartitionAssignor.java:66)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.performAssignment(ConsumerCoordinator.java:589)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.onJoinLeader(AbstractCoordinator.java:686)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.access$1100(AbstractCoordinator.java:111)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:599)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$JoinGroupResponseHandler.handle(AbstractCoordinator.java:562)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1151)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:1126)
> at
>
> org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:206)
> at
>
> org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:169)
> at
>
> org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:129)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.fireCompletion(ConsumerNetworkClient.java:602)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.firePendingCompletedRequests(ConsumerNetworkClient.java:412)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:297)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1296)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1237)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1210)
> at io.mylab.adapter.KafkaListener.run(EdfKafkaListener.java:93)
> at java.lang.Thread.run(Thread.java:748)
>
> The option to exit the state is to stop some members of the consumer group.
>
> Version: 2.6.1
> PARTITION_ASSIGNMENT_STRATEGY: CooperativeStickyAssignor
>
> Would you please advise what would be the condition to trigger such issue?
>
> Thanks!
>


Re: How to avoid storing password in clear text in server.properties file

2021-06-21 Thread Ran Lupovich
Using Confluent Platform you can use a feature called Secrets; I am not
familiar with an open-source solution for this.

https://docs.confluent.io/platform/current/security/secrets.html

On Mon, Jun 21, 2021, 16:26, Dhirendra Singh <dhirendr...@gmail.com> wrote:

> Hi All,
> I am currently storing various passwords like "ssl.keystore.password",
> "ssl.truststore.password", SASL plain user password in cleartext in
> server.properties file.
> is there any way to store the password in encrypted text ?
> i am using kafka version 2.5.0
>


Re: Kafka Ate My Data!

2021-06-17 Thread Ran Lupovich
Having the settings described above will tolerate one broker going down
without a service outage.

On Fri, Jun 18, 2021, 00:42, Ran Lupovich <ranlupov...@gmail.com> wrote:

> That's why you have a minimum of 3 brokers in production, with the
> replication factor set to 3, min.insync.replicas set to 2, and each broker
> on a different rack; you could also use MM2 or Replicator to copy the data
> to another DC...
>
> On Fri, Jun 18, 2021, 00:33, Jhanssen Fávaro <jhanssenfav...@gmail.com> wrote:
>
>> That's a disaster recovery simulation; we need to validate a way to avoid
>> that in a disaster scenario!! I mean, if I have a disaster and the servers
>> get rebooted, we need to prevent this Kafka weakness.
>>
>> Regards,
>> Jhanssen Fávaro de Oliveira
>>
>>
>>
>> On Thu, Jun 17, 2021 at 6:30 PM Sunil Unnithan 
>> wrote:
>>
>> > Why would you reboot all three brokers on same week/day?
>> >
>> > On Thu, Jun 17, 2021 at 5:26 PM Jhanssen Fávaro <
>> jhanssenfav...@gmail.com>
>> > wrote:
>> >
>> > > Sunil,
>> > > Business needs... Anyway, if it was 2, we would face the same problem.
>> > For
>> > > example if the partition leader was the last one to be rebooted and
>> then
>> > > got its disk corrupted. The erase would happens the same way.
>> > >
>> > > Regrads,
>> > >
>> > > On 2021/06/17 21:23:40, Sunil Unnithan  wrote:
>> > > > Why isr=all? Why not use min.isr=2 in this case?
>> > > >
>> > > > On Thu, Jun 17, 2021 at 5:11 PM Jhanssen Fávaro <
>> > > jhanssenfav...@gmail.com>
>> > > > wrote:
>> > > >
>> > > > > Basically, if we have 3 brokers and the ISR == all, and in the
>> case
>> > > that a
>> > > > > leader partition broker was the last server that was
>> > > restarted/rebooted,
>> > > > > and during its startup got a disk corruption, all the followers
>> will
>> > > mark
>> > > > > the topic as offline.
>> > > > > So, If the last broker leader that got the corrupted disk starts,
>> It
>> > > will
>> > > > > be back to the partition leaderhip and then erase all the others
>> > > > > followers/brokers in the cluster.
>> > > > >
>> > > > > It should at least "asks" the other 2 brokers if they are not
>> zeroed.
>> > > > > Anyway to avoid this data to be truncate in the followers ?
>> > > > >
>> > > > > Best Regards,
>> > > > > Jhanssen
>> > > > > > On 2021/06/17 20:54:50, Jhanssen Fávaro <
>> jhanssenfav...@gmail.com>
>> > > > > wrote:
>> > > > > > Hi all, we were testing kafka disaster/recover in our Sites.
>> > > > > >
>> > > > > > Anyway do avoid the scenario in this post ?
>> > > > > >
>> https://blog.softwaremill.com/help-kafka-ate-my-data-ae2e5d3e6576
>> > > > > >
>> > > > > > But, the Unclean Leader exception is not an option in our case.
>> > > > > > FYI..
>> > > > > > We needed to deactivated our systemctl for kafka brokers to
>> avoid a
>> > > > > service startup with a corrupted leader disk.
>> > > > > >
>> > > > > > Best Regards!
>> > > > > >
>> > > > > >
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: Kafka Ate My Data!

2021-06-17 Thread Ran Lupovich
That's why you have a minimum of 3 brokers in production, with the
replication factor set to 3, min.insync.replicas set to 2, and each broker
on a different rack; you could also use MM2 or Replicator to copy the data
to another DC...
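
In config terms that recommendation looks roughly like this (broker.rack gets a different value on each broker; producers that care about the data must also send with acks=all, and keeping unclean leader election off is the usual guard against an out-of-sync replica taking over and wiping the others):

    # server.properties on each broker
    broker.rack=rack-1
    default.replication.factor=3
    min.insync.replicas=2
    unclean.leader.election.enable=false

    # producer config
    acks=all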

On Fri, Jun 18, 2021, 00:33, Jhanssen Fávaro <jhanssenfav...@gmail.com> wrote:

> That's a disaster recovery simulation; we need to validate a way to avoid
> that in a disaster scenario!! I mean, if I have a disaster and the servers
> get rebooted, we need to prevent this Kafka weakness.
>
> Regards,
> Jhanssen Fávaro de Oliveira
>
>
>
> On Thu, Jun 17, 2021 at 6:30 PM Sunil Unnithan 
> wrote:
>
> > Why would you reboot all three brokers on same week/day?
> >
> > On Thu, Jun 17, 2021 at 5:26 PM Jhanssen Fávaro <
> jhanssenfav...@gmail.com>
> > wrote:
> >
> > > Sunil,
> > > Business needs... Anyway, if it was 2, we would face the same problem.
> > For
> > > example if the partition leader was the last one to be rebooted and
> then
> > > got its disk corrupted. The erase would happens the same way.
> > >
> > > Regrads,
> > >
> > > On 2021/06/17 21:23:40, Sunil Unnithan  wrote:
> > > > Why isr=all? Why not use min.isr=2 in this case?
> > > >
> > > > On Thu, Jun 17, 2021 at 5:11 PM Jhanssen Fávaro <
> > > jhanssenfav...@gmail.com>
> > > > wrote:
> > > >
> > > > > Basically, if we have 3 brokers and the ISR == all, and in the case
> > > that a
> > > > > leader partition broker was the last server that was
> > > restarted/rebooted,
> > > > > and during its startup got a disk corruption, all the followers
> will
> > > mark
> > > > > the topic as offline.
> > > > > So, If the last broker leader that got the corrupted disk starts,
> It
> > > will
> > > > > be back to the partition leaderhip and then erase all the others
> > > > > followers/brokers in the cluster.
> > > > >
> > > > > It should at least "asks" the other 2 brokers if they are not
> zeroed.
> > > > > Anyway to avoid this data to be truncate in the followers ?
> > > > >
> > > > > Best Regards,
> > > > > Jhanssen
> > > > > On 2021/06/17 20:54:50, Jhanssen Fávaro  >
> > > > > wrote:
> > > > > > Hi all, we were testing kafka disaster/recover in our Sites.
> > > > > >
> > > > > > Anyway do avoid the scenario in this post ?
> > > > > >
> https://blog.softwaremill.com/help-kafka-ate-my-data-ae2e5d3e6576
> > > > > >
> > > > > > But, the Unclean Leader exception is not an option in our case.
> > > > > > FYI..
> > > > > > We needed to deactivated our systemctl for kafka brokers to
> avoid a
> > > > > service startup with a corrupted leader disk.
> > > > > >
> > > > > > Best Regards!
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: kafka 2 way ssl authentication

2021-06-04 Thread Ran Lupovich
Share your new configs and logs

On Fri, Jun 4, 2021, 12:06, Dhirendra Singh <dhirendr...@gmail.com> wrote:

> I tried the keytool command suggested by you. still getting the same error.
>
> On Fri, Jun 4, 2021 at 10:50 AM Ran Lupovich 
> wrote:
>
> > The default format is jks,
> >
> >
> > use keytool to create a Java KeyStore (JKS) with the certificate and key
> > for use by Kafka. You'll be prompted to create a new password for the
> > resulting file as well as enter the password for the PKCS12 file from the
> > previous step. Hang onto the new JKS password for use in configuration
> > below.
> >
> > $ keytool -importkeystore -srckeystore server.p12 -destkeystore
> > kafka.server.keystore.jks -srcstoretype pkcs12 -alias
> > myserver.internal.net
> >
> > Note: It's safe to ignore the following warning from keytool.
> >
> > The JKS keystore uses a proprietary format. It is recommended to
> > migrate to PKCS12 which is an industry standard format using "keytool
> > -importkeystore -srckeystore server.p12 -destkeystore
> > kafka.server.keystore.jks -srcstoretype pkcs12"
> >
> >
> > On Fri, Jun 4, 2021, 07:40, Dhirendra Singh <dhirendr...@gmail.com> wrote:
> >
> > > I am trying to setup 2 way ssl authentication. My requirement is broker
> > > should authenticate only specific clients.
> > > My organization has a CA which issue all certificates in pkcs12 format.
> > > steps i followed are as follows.
> > >
> > > 1. get a certificate for the broker and configured it in the broker
> > > keystore
> > >ssl.keystore.location=/home/kafka/certificate.p12
> > >ssl.keystore.password=x
> > >ssl.client.auth=required
> > > 2. get a certificate for the client and configured it in the client
> > > keystore
> > >ssl.keystore.location=/home/kafka/certificate.p12
> > >ssl.keystore.password=x
> > > 3. extracted the public certificate from the client certificate using
> > > keytool command
> > >keytool -export -file cert -keystore certificate.p12 -alias "12345"
> > > -storetype pkcs12 -storepass x
> > > 4. imported the certificate into broker truststore. broker truststore
> > > contains only the client 12345 certificate.
> > >keytool -keystore truststore.p12 -import -file cert -alias 12345
> > > -storetype pkcs12 -storepass x -noprompt
> > > 5. configured the truststore in the broker.
> > >ssl.truststore.location=/home/kafka/truststore.p12
> > >ssl.truststore.password=x
> > > 6. configured the truststore in client. client truststore contains CA
> > > certificates.
> > >ssl.truststore.location=/etc/pki/java/cacerts
> > >ssl.truststore.password=x
> > >
> > > When i run the broker and client i expect the broker to authenticate
> the
> > > client and establish ssl connection. but instead following error is
> > thrown.
> > > [2021-06-03 23:32:06,864] ERROR [AdminClient clientId=adminclient-1]
> > > Connection to node -1 (abc.com/10.129.140.212:9093) failed
> > authentication
> > > due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
> > > [2021-06-03 23:32:06,866] WARN [AdminClient clientId=adminclient-1]
> > > Metadata update failed due to authentication error
> > > (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
> > > org.apache.kafka.common.errors.SslAuthenticationException: SSL
> handshake
> > > failed
> > > Caused by: javax.net.ssl.SSLProtocolException: Unexpected handshake
> > > message: server_hello
> > >
> > > I tried various things but nothing seems to work. when i replace the
> > broker
> > > truststore with /etc/pki/java/cacerts truststore file which contains
> only
> > > the CA certificate
> > > then it works fine. but it will authenticate any client which has
> > > certificate issued by the CA.
> > >
> > > what could be the issue ?
> > >
> >
>


Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
What do you mean, "if you can"? It is a supported option and you can set it
up, but it seems that updating it dynamically is not implemented yet. I would
have to look into the Kafka code to confirm that, and I am not going to do
that at the moment.
On Fri, Jun 4, 2021, 11:27, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:

> But according to the documentation provided by you we can configure
> SSL.client.auth right??
>
> Config options:
>
> Listener Configs
>
> listeners
> advertised.listeners
> listener.security.protocol.map
> Common security config
>
> principal.builder.class
> SSL Configs
>
> ssl.protocol
> ssl.provider
> ssl.cipher.suites
> ssl.enabled.protocols
> ssl.truststore.type
> ssl.truststore.location
> ssl.truststore.password
> ssl.keymanager.algorithm
> ssl.trustmanager.algorithm
> ssl.endpoint.identification.algorithm
> ssl.secure.random.implementation
> *ssl.client.auth*
>
> On Fri, Jun 4, 2021, 13:45 Ran Lupovich  wrote:
>
> > All the security configs can be dynamically configured for new listeners.
> > In the initial implementation, only some configs will be dynamically
> > updatable for existing listeners (e.g. SSL keystores). Support for
> updating
> > other security configs dynamically for existing listeners will be added
> > later
> >
> >
> >
> https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=74687608#content/view/74687608
> >
> >
> > Maybe not supported yet?
> >
> >
> >
> > On Fri, Jun 4, 2021, 10:49, Ran Lupovich <ranlupov...@gmail.com> wrote:
> >
> > > Thanks for checking... is there a way for you to check whether this
> > > behavior applies to "already connected clients", and to check only what
> > > happens to "new connections"?
> > >
> > > On Fri, Jun 4, 2021, 10:47, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
> > >
> > >> Neither listener specific nor ssl.client.auth is working dynamically
> > >>
> > >> On Fri, Jun 4, 2021, 13:04 Ran Lupovich 
> wrote:
> > >>
> > >> > And not* to specific listener
> > >> >
> > >> > On Fri, Jun 4, 2021, 10:30, Ran Lupovich <ranlupov...@gmail.com> wrote:
> > >> >
> > >> > > According to documentation it is dynamic and should work, though
> it
> > is
> > >> > > "general" ssl.auth of the entire broker setting and to specific
> > >> listener
> > >> > as
> > >> > > you are trying out , but the logic says it should work the same...
> > >> > besides
> > >> > > that I do not have anything smart to suggest, the only
> understanding
> > >> we
> > >> > > need is if specfic listener config is dynamic changeable and when
> it
> > >> take
> > >> > > place? New connections? Do all your client fully discconect and
> > >> reconnect
> > >> > > to that listener?
> > >> > >
> > >> > > On Fri, Jun 4, 2021, 10:25, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
> > >> > >
> > >> > >> Yes restarting the Kafka solves the problem but as it is dynamic
> > >> there
> > >> > is
> > >> > >> no need to restart the Kafka right?
> > >> > >>
> > >> > >> On Fri, Jun 4, 2021, 12:13 Ran Lupovich 
> > >> wrote:
> > >> > >>
> > >> > >> > Restarting the broker solves the problem? Do your clients fully
> > >> > >> disconnect
> > >> > >> > and reconnect?
> > >> > >> >
> > >> > >> > On Fri, Jun 4, 2021, 09:24, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
> > >> > >> >
> > >> > >> > > Hi Ran,
> > >> > >> > >
> > >> > >> > > Thank you so much for the help, but had already gone through
> > the
> > >> > >> > > documentation, but despite doing the same thing it is not
> > >> working ,
> > >> > we
> > >> > >> > are
> > >> > >> > > not getting any client certificate request as such , is there
> > >> > anything
> 

Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
All the security configs can be dynamically configured for new listeners.
In the initial implementation, only some configs will be dynamically
updatable for existing listeners (e.g. SSL keystores). Support for updating
other security configs dynamically for existing listeners will be added
later

https://cwiki.apache.org/confluence/plugins/servlet/mobile?contentId=74687608#content/view/74687608


Maybe not supported yet?



On Fri, Jun 4, 2021, 10:49, Ran Lupovich wrote:

> Thanks for checking... is there a way for you to check whether this behavior
> applies to "already connected clients", and to check only what happens to
> "new connections"?
>
> On Fri, Jun 4, 2021, 10:47, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
>
>> Neither listener specific nor ssl.client.auth is working dynamically
>>
>> On Fri, Jun 4, 2021, 13:04 Ran Lupovich  wrote:
>>
>> > And not* to specific listener
>> >
> > On Fri, Jun 4, 2021, 10:30, Ran Lupovich <ranlupov...@gmail.com> wrote:
>> >
>> > > According to documentation it is dynamic and should work, though it is
>> > > "general" ssl.auth of the entire broker setting and to specific
>> listener
>> > as
>> > > you are trying out , but the logic says it should work the same...
>> > besides
>> > > that I do not have anything smart to suggest, the only understanding
>> we
>> > > need is if specfic listener config is dynamic changeable and when it
>> take
>> > > place? New connections? Do all your client fully discconect and
>> reconnect
>> > > to that listener?
>> > >
> > > On Fri, Jun 4, 2021, 10:25, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
>> > >
>> > >> Yes restarting the Kafka solves the problem but as it is dynamic
>> there
>> > is
>> > >> no need to restart the Kafka right?
>> > >>
>> > >> On Fri, Jun 4, 2021, 12:13 Ran Lupovich 
>> wrote:
>> > >>
>> > >> > Restarting the broker solves the problem? Do your clients fully
>> > >> disconnect
>> > >> > and reconnect?
>> > >> >
> > >> > On Fri, Jun 4, 2021, 09:24, Anjali Sharma <sharma.anjali.2...@gmail.com> wrote:
>> > >> >
>> > >> > > Hi Ran,
>> > >> > >
>> > >> > > Thank you so much for the help, but had already gone through the
>> > >> > > documentation, but despite doing the same thing it is not
>> working ,
>> > we
>> > >> > are
>> > >> > > not getting any client certificate request as such , is there
>> > anything
>> > >> > that
>> > >> > > I am missing in the executing the command or we need to restart
>> the
>> > >> > brokers
>> > >> > > or anything else we need to do?
>> > >> > >
>> > >> > >
>> > >> > > Thanks & Regards
>> > >> > > Anjali
>> > >> > >
>> > >> > > On Fri, Jun 4, 2021 at 11:17 AM Ran Lupovich <
>> ranlupov...@gmail.com
>> > >
>> > >> > > wrote:
>> > >> > >
>> > >> > > > Adding this information that supports your assumptions that it
>> > >> should
>> > >> > be
>> > >> > > > dynamically supportedNotice the update mode -
>> > >> > > >
>> > >> > > > Dynamic Update Mode option in Broker Configurations
>> > >> > > > <
>> > >> > > >
>> > >> > >
>> > >> >
>> > >>
>> >
>> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#cp-config-brokers
>> > >> > > > >
>> > >> > > > for
>> > >> > > > the update mode of each broker configuration.
>> > >> > > >
>> > >> > > >- read-only: Requires a broker restart for update.
>> > >> > > >- per-broker: May be updated dynamically for each broker.
>> > >> > > >- cluster-wide: May be updated dynamically as a cluster-wide
>> > >> > default.
>> > >> > &

Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
Thanks for checking... is there a way for you to check whether this behavior is
only for "already connected clients", and to check what happens to "new
connections"?

בתאריך יום ו׳, 4 ביוני 2021, 10:47, מאת Anjali Sharma ‏<
sharma.anjali.2...@gmail.com>:

> Neither listener specific nor ssl.client.auth is working dynamically
>
> On Fri, Jun 4, 2021, 13:04 Ran Lupovich  wrote:
>
> > And not* to specific listener
> >
> > בתאריך יום ו׳, 4 ביוני 2021, 10:30, מאת Ran Lupovich ‏<
> > ranlupov...@gmail.com
> > >:
> >
> > > According to documentation it is dynamic and should work, though it is
> > > "general" ssl.auth of the entire broker setting and to specific
> listener
> > as
> > > you are trying out , but the logic says it should work the same...
> > besides
> > > that I do not have anything smart to suggest, the only understanding we
> > > need is if specfic listener config is dynamic changeable and when it
> take
> > > place? New connections? Do all your client fully discconect and
> reconnect
> > > to that listener?
> > >
> > > בתאריך יום ו׳, 4 ביוני 2021, 10:25, מאת Anjali Sharma ‏<
> > > sharma.anjali.2...@gmail.com>:
> > >
> > >> Yes restarting the Kafka solves the problem but as it is dynamic there
> > is
> > >> no need to restart the Kafka right?
> > >>
> > >> On Fri, Jun 4, 2021, 12:13 Ran Lupovich 
> wrote:
> > >>
> > >> > Restarting the broker solves the problem? Do your clients fully
> > >> disconnect
> > >> > and reconnect?
> > >> >
> > >> > בתאריך יום ו׳, 4 ביוני 2021, 09:24, מאת Anjali Sharma ‏<
> > >> > sharma.anjali.2...@gmail.com>:
> > >> >
> > >> > > Hi Ran,
> > >> > >
> > >> > > Thank you so much for the help, but had already gone through the
> > >> > > documentation, but despite doing the same thing it is not working
> ,
> > we
> > >> > are
> > >> > > not getting any client certificate request as such , is there
> > anything
> > >> > that
> > >> > > I am missing in the executing the command or we need to restart
> the
> > >> > brokers
> > >> > > or anything else we need to do?
> > >> > >
> > >> > >
> > >> > > Thanks & Regards
> > >> > > Anjali
> > >> > >
> > >> > > On Fri, Jun 4, 2021 at 11:17 AM Ran Lupovich <
> ranlupov...@gmail.com
> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Adding this information that supports your assumptions that it
> > >> should
> > >> > be
> > >> > > > dynamically supportedNotice the update mode -
> > >> > > >
> > >> > > > Dynamic Update Mode option in Broker Configurations
> > >> > > > <
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#cp-config-brokers
> > >> > > > >
> > >> > > > for
> > >> > > > the update mode of each broker configuration.
> > >> > > >
> > >> > > >- read-only: Requires a broker restart for update.
> > >> > > >- per-broker: May be updated dynamically for each broker.
> > >> > > >- cluster-wide: May be updated dynamically as a cluster-wide
> > >> > default.
> > >> > > >May also be updated as a per-broker value for testing
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > ssl.client.auth
> > >> > > > <
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_ssl.client.auth
> > >> > > > >
> > >> > > >
> > >> > > > Configures kafka broker to request client authentication. The
> > >> following
> > >> > > > settings are common:
> > >> > > >
> > >> > > >- ssl.clien

Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
And not* to specific listener

בתאריך יום ו׳, 4 ביוני 2021, 10:30, מאת Ran Lupovich ‏:

> According to documentation it is dynamic and should work, though it is
> "general" ssl.auth of the entire broker setting and to specific listener as
> you are trying out , but the logic says it should work the same... besides
> that I do not have anything smart to suggest, the only understanding we
> need is if specfic listener config is dynamic changeable and when it take
> place? New connections? Do all your client fully discconect and reconnect
> to that listener?
>
> בתאריך יום ו׳, 4 ביוני 2021, 10:25, מאת Anjali Sharma ‏<
> sharma.anjali.2...@gmail.com>:
>
>> Yes restarting the Kafka solves the problem but as it is dynamic there is
>> no need to restart the Kafka right?
>>
>> On Fri, Jun 4, 2021, 12:13 Ran Lupovich  wrote:
>>
>> > Restarting the broker solves the problem? Do your clients fully
>> disconnect
>> > and reconnect?
>> >
>> > בתאריך יום ו׳, 4 ביוני 2021, 09:24, מאת Anjali Sharma ‏<
>> > sharma.anjali.2...@gmail.com>:
>> >
>> > > Hi Ran,
>> > >
>> > > Thank you so much for the help, but had already gone through the
>> > > documentation, but despite doing the same thing it is not working , we
>> > are
>> > > not getting any client certificate request as such , is there anything
>> > that
>> > > I am missing in the executing the command or we need to restart the
>> > brokers
>> > > or anything else we need to do?
>> > >
>> > >
>> > > Thanks & Regards
>> > > Anjali
>> > >
>> > > On Fri, Jun 4, 2021 at 11:17 AM Ran Lupovich 
>> > > wrote:
>> > >
>> > > > Adding this information that supports your assumptions that it
>> should
>> > be
>> > > > dynamically supportedNotice the update mode -
>> > > >
>> > > > Dynamic Update Mode option in Broker Configurations
>> > > > <
>> > > >
>> > >
>> >
>> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#cp-config-brokers
>> > > > >
>> > > > for
>> > > > the update mode of each broker configuration.
>> > > >
>> > > >- read-only: Requires a broker restart for update.
>> > > >- per-broker: May be updated dynamically for each broker.
>> > > >- cluster-wide: May be updated dynamically as a cluster-wide
>> > default.
>> > > >May also be updated as a per-broker value for testing
>> > > >
>> > > >
>> > > >
>> > > > ssl.client.auth
>> > > > <
>> > > >
>> > >
>> >
>> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_ssl.client.auth
>> > > > >
>> > > >
>> > > > Configures kafka broker to request client authentication. The
>> following
>> > > > settings are common:
>> > > >
>> > > >- ssl.client.auth=required If set to required client
>> authentication
>> > is
>> > > >required.
>> > > >- ssl.client.auth=requested This means client authentication is
>> > > >optional. unlike required, if this option is set client can
>> choose
>> > not
>> > > > to
>> > > >provide authentication information about itself
>> > > >- ssl.client.auth=none This means client authentication is not
>> > needed.
>> > > >
>> > > > Type: string
>> > > > Default: none
>> > > > Valid Values: [required, requested, none]
>> > > > Importance: medium
>> > > > Update Mode: per-broker
>> > > >
>> > > > בתאריך יום ו׳, 4 ביוני 2021, 08:30, מאת Anjali Sharma ‏<
>> > > > sharma.anjali.2...@gmail.com>:
>> > > >
>> > > > > Dear All,
>> > > > >
>> > > > > When trying to configure mtls without restarting the brokers it is
>> > not
>> > > > > working.
>> > > > > For mutualTLS "ssl.client.auth" should be set to "required". So,
>> if
>> > we
>> > > > are
>> > > > > trying to do the dynamic update using 

Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
According to documentation it is dynamic and should work, though it is
"general" ssl.auth of the entire broker setting and to specific listener as
you are trying out, but the logic says it should work the same... besides
that I do not have anything smart to suggest, the only understanding we
need is if the specific listener config is dynamically changeable and when it takes
place? New connections? Do all your clients fully disconnect and reconnect
to that listener?

בתאריך יום ו׳, 4 ביוני 2021, 10:25, מאת Anjali Sharma ‏<
sharma.anjali.2...@gmail.com>:

> Yes restarting the Kafka solves the problem but as it is dynamic there is
> no need to restart the Kafka right?
>
> On Fri, Jun 4, 2021, 12:13 Ran Lupovich  wrote:
>
> > Restarting the broker solves the problem? Do your clients fully
> disconnect
> > and reconnect?
> >
> > בתאריך יום ו׳, 4 ביוני 2021, 09:24, מאת Anjali Sharma ‏<
> > sharma.anjali.2...@gmail.com>:
> >
> > > Hi Ran,
> > >
> > > Thank you so much for the help, but had already gone through the
> > > documentation, but despite doing the same thing it is not working , we
> > are
> > > not getting any client certificate request as such , is there anything
> > that
> > > I am missing in the executing the command or we need to restart the
> > brokers
> > > or anything else we need to do?
> > >
> > >
> > > Thanks & Regards
> > > Anjali
> > >
> > > On Fri, Jun 4, 2021 at 11:17 AM Ran Lupovich 
> > > wrote:
> > >
> > > > Adding this information that supports your assumptions that it should
> > be
> > > > dynamically supportedNotice the update mode -
> > > >
> > > > Dynamic Update Mode option in Broker Configurations
> > > > <
> > > >
> > >
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#cp-config-brokers
> > > > >
> > > > for
> > > > the update mode of each broker configuration.
> > > >
> > > >- read-only: Requires a broker restart for update.
> > > >- per-broker: May be updated dynamically for each broker.
> > > >- cluster-wide: May be updated dynamically as a cluster-wide
> > default.
> > > >May also be updated as a per-broker value for testing
> > > >
> > > >
> > > >
> > > > ssl.client.auth
> > > > <
> > > >
> > >
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_ssl.client.auth
> > > > >
> > > >
> > > > Configures kafka broker to request client authentication. The
> following
> > > > settings are common:
> > > >
> > > >- ssl.client.auth=required If set to required client
> authentication
> > is
> > > >required.
> > > >- ssl.client.auth=requested This means client authentication is
> > > >optional. unlike required, if this option is set client can choose
> > not
> > > > to
> > > >provide authentication information about itself
> > > >- ssl.client.auth=none This means client authentication is not
> > needed.
> > > >
> > > > Type: string
> > > > Default: none
> > > > Valid Values: [required, requested, none]
> > > > Importance: medium
> > > > Update Mode: per-broker
> > > >
> > > > בתאריך יום ו׳, 4 ביוני 2021, 08:30, מאת Anjali Sharma ‏<
> > > > sharma.anjali.2...@gmail.com>:
> > > >
> > > > > Dear All,
> > > > >
> > > > > When trying to configure mtls without restarting the brokers it is
> > not
> > > > > working.
> > > > > For mutualTLS "ssl.client.auth" should be set to "required". So, if
> > we
> > > > are
> > > > > trying to do the dynamic update using the below command
> > > > >
> > > > > *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server
> > localhost:28104
> > > > > --entity-type brokers --entity-name 117373 **--alter --add-config
> > > > > listener.name.app.ssl.client.auth=required*
> > > > > *Completed updating config for broker 117373.*
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server
> > localhost:28104
> > > > > --entity-type brokers --entity-name 117373 --describeDynamic
> configs
> > > for
> > > > > broker 117373 are: listener.name.app.ssl.client.auth=required
> > > > > sensitive=false
> > > > >
> > > >
> > >
> >
> synonyms={DYNAMIC_BROKER_CONFIG:listener.name.app.ssl.client.auth=required,
> > > > > STATIC_BROKER_CONFIG:ssl.client.auth=none,
> > > > > DEFAULT_CONFIG:ssl.client.auth=none}*
> > > > > Dynamic command execution is success but in captured tcpdump(pcap)
> > > > > "Certificate Request" is not sent from Server below enter image
> > > > description
> > > > > here.
> > > > >
> > > > >
> > > > > But if we alter manually and restart Kafka we can see "Certificate
> > > > > Request" from Server in tcpdump.
> > > > >
> > > > > Please help in resolving the dynamic update of altering
> > > > > "ssl.client.auth=Required"
> > > > >
> > > > >
> > > > > Pcap image is attached
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: Kafka 2 way authentication not working

2021-06-04 Thread Ran Lupovich
Restarting the broker solves the problem? Do your clients fully disconnect
and reconnect?

בתאריך יום ו׳, 4 ביוני 2021, 09:24, מאת Anjali Sharma ‏<
sharma.anjali.2...@gmail.com>:

> Hi Ran,
>
> Thank you so much for the help, but had already gone through the
> documentation, but despite doing the same thing it is not working , we are
> not getting any client certificate request as such , is there anything that
> I am missing in the executing the command or we need to restart the brokers
> or anything else we need to do?
>
>
> Thanks & Regards
> Anjali
>
> On Fri, Jun 4, 2021 at 11:17 AM Ran Lupovich 
> wrote:
>
> > Adding this information that supports your assumptions that it should be
> > dynamically supportedNotice the update mode -
> >
> > Dynamic Update Mode option in Broker Configurations
> > <
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#cp-config-brokers
> > >
> > for
> > the update mode of each broker configuration.
> >
> >- read-only: Requires a broker restart for update.
> >- per-broker: May be updated dynamically for each broker.
> >- cluster-wide: May be updated dynamically as a cluster-wide default.
> >May also be updated as a per-broker value for testing
> >
> >
> >
> > ssl.client.auth
> > <
> >
> https://docs.confluent.io/platform/current/installation/configuration/broker-configs.html#brokerconfigs_ssl.client.auth
> > >
> >
> > Configures kafka broker to request client authentication. The following
> > settings are common:
> >
> >- ssl.client.auth=required If set to required client authentication is
> >required.
> >- ssl.client.auth=requested This means client authentication is
> >optional. unlike required, if this option is set client can choose not
> > to
> >provide authentication information about itself
> >- ssl.client.auth=none This means client authentication is not needed.
> >
> > Type: string
> > Default: none
> > Valid Values: [required, requested, none]
> > Importance: medium
> > Update Mode: per-broker
> >
> > בתאריך יום ו׳, 4 ביוני 2021, 08:30, מאת Anjali Sharma ‏<
> > sharma.anjali.2...@gmail.com>:
> >
> > > Dear All,
> > >
> > > When trying to configure mtls without restarting the brokers it is not
> > > working.
> > > For mutualTLS "ssl.client.auth" should be set to "required". So, if we
> > are
> > > trying to do the dynamic update using the below command
> > >
> > > *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104
> > > --entity-type brokers --entity-name 117373 **--alter --add-config
> > > listener.name.app.ssl.client.auth=required*
> > > *Completed updating config for broker 117373.*
> > >
> > >
> > >
> > >
> > > *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104
> > > --entity-type brokers --entity-name 117373 --describeDynamic configs
> for
> > > broker 117373 are: listener.name.app.ssl.client.auth=required
> > > sensitive=false
> > >
> >
> synonyms={DYNAMIC_BROKER_CONFIG:listener.name.app.ssl.client.auth=required,
> > > STATIC_BROKER_CONFIG:ssl.client.auth=none,
> > > DEFAULT_CONFIG:ssl.client.auth=none}*
> > > Dynamic command execution is success but in captured tcpdump(pcap)
> > > "Certificate Request" is not sent from Server below enter image
> > description
> > > here.
> > >
> > >
> > > But if we alter manually and restart Kafka we can see "Certificate
> > > Request" from Server in tcpdump.
> > >
> > > Please help in resolving the dynamic update of altering
> > > "ssl.client.auth=Required"
> > >
> > >
> > > Pcap image is attached
> > >
> > >
> >
>


Re: Kafka 2 way authentication not working

2021-06-03 Thread Ran Lupovich
Adding this information that supports your assumptions that it should be
dynamically supported. Notice the update mode -

Dynamic Update Mode option in Broker Configurations

for
the update mode of each broker configuration.

   - read-only: Requires a broker restart for update.
   - per-broker: May be updated dynamically for each broker.
   - cluster-wide: May be updated dynamically as a cluster-wide default.
   May also be updated as a per-broker value for testing



ssl.client.auth


Configures kafka broker to request client authentication. The following
settings are common:

   - ssl.client.auth=required If set to required client authentication is
   required.
   - ssl.client.auth=requested This means client authentication is
   optional. unlike required, if this option is set client can choose not to
   provide authentication information about itself
   - ssl.client.auth=none This means client authentication is not needed.

Type: string
Default: none
Valid Values: [required, requested, none]
Importance: medium
Update Mode: per-broker
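
For completeness, a dynamic per-broker override added this way can also be
removed again, so the static server.properties value applies after the next
restart (broker id, port and listener name are taken from the commands quoted
below):

sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104 \
  --entity-type brokers --entity-name 117373 \
  --alter --delete-config listener.name.app.ssl.client.auth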

בתאריך יום ו׳, 4 ביוני 2021, 08:30, מאת Anjali Sharma ‏<
sharma.anjali.2...@gmail.com>:

> Dear All,
>
> When trying to configure mtls without restarting the brokers it is not
> working.
> For mutualTLS "ssl.client.auth" should be set to "required". So, if we are
> trying to do the dynamic update using the below command
>
> *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104
> --entity-type brokers --entity-name 117373 **--alter --add-config
> listener.name.app.ssl.client.auth=required*
> *Completed updating config for broker 117373.*
>
>
>
>
> *sh /opt/kafka/bin/kafka-configs.sh --bootstrap-server localhost:28104
> --entity-type brokers --entity-name 117373 --describeDynamic configs for
> broker 117373 are: listener.name.app.ssl.client.auth=required
> sensitive=false
> synonyms={DYNAMIC_BROKER_CONFIG:listener.name.app.ssl.client.auth=required,
> STATIC_BROKER_CONFIG:ssl.client.auth=none,
> DEFAULT_CONFIG:ssl.client.auth=none}*
> Dynamic command execution is success but in captured tcpdump(pcap)
> "Certificate Request" is not sent from Server below enter image description
> here.
>
>
> But if we alter manually and restart Kafka we can see "Certificate
> Request" from Server in tcpdump.
>
> Please help in resolving the dynamic update of altering
> "ssl.client.auth=Required"
>
>
> Pcap image is attached
>
>


Re: kafka 2 way ssl authentication

2021-06-03 Thread Ran Lupovich
The default format is jks,


use keytool to create a Java KeyStore (JKS) with the certificate and key
for use by Kafka. You'll be prompted to create a new password for the
resulting file as well as enter the password for the PKCS12 file from the
previous step. Hang onto the new JKS password for use in configuration
below.

$ keytool -importkeystore -srckeystore server.p12 -destkeystore
kafka.server.keystore.jks -srcstoretype pkcs12 -alias
myserver.internal.net

Note: It's safe to ignore the following warning from keytool.

The JKS keystore uses a proprietary format. It is recommended to
migrate to PKCS12 which is an industry standard format using "keytool
-importkeystore -srckeystore server.p12 -destkeystore
kafka.server.keystore.jks -srcstoretype pkcs12"
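
A quick sanity check on the resulting store (it will prompt for the keystore
password; the file name matches the example above) is:

$ keytool -list -v -keystore kafka.server.keystore.jks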


בתאריך יום ו׳, 4 ביוני 2021, 07:40, מאת Dhirendra Singh ‏<
dhirendr...@gmail.com>:

> I am trying to setup 2 way ssl authentication. My requirement is broker
> should authenticate only specific clients.
> My organization has a CA which issue all certificates in pkcs12 format.
> steps i followed are as follows.
>
> 1. get a certificate for the broker and configured it in the broker
> keystore
>ssl.keystore.location=/home/kafka/certificate.p12
>ssl.keystore.password=x
>ssl.client.auth=required
> 2. get a certificate for the client and configured it in the client
> keystore
>ssl.keystore.location=/home/kafka/certificate.p12
>ssl.keystore.password=x
> 3. extracted the public certificate from the client certificate using
> keytool command
>keytool -export -file cert -keystore certificate.p12 -alias "12345"
> -storetype pkcs12 -storepass x
> 4. imported the certificate into broker truststore. broker truststore
> contains only the client 12345 certificate.
>keytool -keystore truststore.p12 -import -file cert -alias 12345
> -storetype pkcs12 -storepass x -noprompt
> 5. configured the truststore in the broker.
>ssl.truststore.location=/home/kafka/truststore.p12
>ssl.truststore.password=x
> 6. configured the truststore in client. client truststore contains CA
> certificates.
>ssl.truststore.location=/etc/pki/java/cacerts
>ssl.truststore.password=x
>
> When i run the broker and client i expect the broker to authenticate the
> client and establish ssl connection. but instead following error is thrown.
> [2021-06-03 23:32:06,864] ERROR [AdminClient clientId=adminclient-1]
> Connection to node -1 (abc.com/10.129.140.212:9093) failed authentication
> due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
> [2021-06-03 23:32:06,866] WARN [AdminClient clientId=adminclient-1]
> Metadata update failed due to authentication error
> (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
> org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake
> failed
> Caused by: javax.net.ssl.SSLProtocolException: Unexpected handshake
> message: server_hello
>
> I tried various things but nothing seems to work. when i replace the broker
> truststore with /etc/pki/java/cacerts truststore file which contains only
> the CA certificate
> then it works fine. but it will authenticate any client which has
> certificate issued by the CA.
>
> what could be the issue ?
>


Re: In case of Max topic size is reached

2021-06-01 Thread Ran Lupovich
segment.bytes
<https://docs.confluent.io/platform/current/installation/configuration/topic-configs.html#topicconfigs_segment.bytes>

This configuration controls the segment file size for the log. Retention
and cleaning is always done a file at a time so a larger segment size means
fewer files but less granular control over retention
segment.ms
<https://docs.confluent.io/platform/current/installation/configuration/topic-configs.html#topicconfigs_segment.ms>

This configuration controls the period of time after which Kafka will force
the log to roll even if the segment file isn't full to ensure that
retention can delete or compact old data
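
Both can also be overridden per topic if you want retention to react faster;
as a rough example (topic name, bootstrap address and values are placeholders
only):

kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics \
  --entity-name my-topic --alter --add-config segment.ms=3600000,segment.bytes=536870912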



בתאריך יום ד׳, 2 ביוני 2021, 03:14, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> And when is that message segment closed? I mean what is the criteria to
> close the message segment?
> Can I change that criteria with configuration?
>
>
> On Tue, 1 Jun 2021 at 11:36 PM, Ran Lupovich 
> wrote:
>
> > They work simultaneously,  topic with cleanup policy  of DELETE , will
> > clean old message older than the retention period and also deletes the
> > oldest messages when retention bytes limit is breached,  notice this
> limit
> > is for each partition in a topic and not total size of the topic, notice
> as
> > well that deletion kicks off only on "closed" message segments
> >
> > בתאריך יום ג׳, 1 ביוני 2021, 20:57, מאת sunil chaudhari ‏<
> > sunilmchaudhar...@gmail.com>:
> >
> > > Hi,
> > > Suppose:
> > > Maximum Topic size is set to 1 GB
> > > Retention hours: 168
> > >  What happens in case  topic size reaches the maximum size before 168
> > > hours.
> > > Will it delete few messages before its expiry though they are eligible
> to
> > > stay for 168 hrs?
> > >
> > >
> > > Regards,
> > > Sunil.
> > >
> >
>


Re: In case of Max topic size is reached

2021-06-01 Thread Ran Lupovich
They work simultaneously: a topic with cleanup policy DELETE will
clean old messages older than the retention period and also delete the
oldest messages when the retention bytes limit is breached. Notice this limit
is for each partition in a topic and not the total size of the topic, and notice as
well that deletion kicks off only on "closed" message segments
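
For illustration only (topic name, bootstrap address and numbers are
placeholders), capping each partition at roughly 1 GB while keeping a 168-hour
time limit would look like:

kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics \
  --entity-name my-topic --alter --add-config retention.ms=604800000,retention.bytes=1073741824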

בתאריך יום ג׳, 1 ביוני 2021, 20:57, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> Hi,
> Suppose:
> Maximum Topic size is set to 1 GB
> Retention hours: 168
>  What happens in case  topic size reaches the maximum size before 168
> hours.
> Will it delete few messages before its expiry though they are eligible to
> stay for 168 hrs?
>
>
> Regards,
> Sunil.
>


Re: Reading offset from one consumer group to use for another consumer group.

2021-05-28 Thread Ran Lupovich
One more thought that you could think about: have two consumer groups, 1
that starts every hour for your "db consumer" and 2 for near real time; the
2nd should run all the time and populate your "memory db" like Redis, and
the TTL could be arranged through Redis's own mechanism

בתאריך יום ו׳, 28 במאי 2021, 21:44, מאת Ran Lupovich ‏:

> So I think, You should write to your db the partition and the offset,
> while initing the real time consumer you'd read from database where to set
> the consumer starting point, kind-of the "exactly once" programming
> approach,
>
> בתאריך יום ו׳, 28 במאי 2021, 21:38, מאת Ronald Fenner ‏<
> rfen...@gamecircus.com>:
>
>> That might work if my consumers were in the same process but the db
>> consumer is a python job running under Airflow and the realtime consumer
>> wold be running as a backend service on another server.
>>
>> Also how would I seed the realtime consumer at startup if the consumer
>> isn't running which would could be possible if it hit the end stream,
>>
>> The db consumer is designed to read until no new message is delivered
>> then exit till it's next spawned.
>>
>> Ronald Fenner
>> Network Architect
>> Game Circus LLC.
>>
>> rfen...@gamecircus.com
>>
>> > On May 28, 2021, at 12:05 AM, Ran Lupovich 
>> wrote:
>> >
>> >
>> https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
>> >
>> > בתאריך יום ו׳, 28 במאי 2021, 08:04, מאת Ran Lupovich ‏<
>> ranlupov...@gmail.com
>> >> :
>> >
>> >> While your DB consumer is running you get the access to the partition
>> >> ${partition} @ offset ${offset}
>> >>
>> >>
>> https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.jswhen
>> >> setting your second consumers for real time just set them tostart from
>> that
>> >> point
>> >>
>> >>
>> >> בתאריך יום ו׳, 28 במאי 2021, 01:51, מאת Ronald Fenner ‏<
>> >> rfen...@gamecircus.com>:
>> >>
>> >>> I'm trying to figure out how to pragmatically read a consumer groups
>> >>> offset for a topic.
>> >>> What I'm trying to do is read the offsets of our DB consumers that run
>> >>> once an hour and batch lad all new messages. I then would have another
>> >>> consumer that monitors the offsets that have been consumed and
>> consume the
>> >>> message not yet loaded storing them in  memory to be able to send
>> them to a
>> >>> viewer. As messages get consumed they then get pruned from the in
>> memory
>> >>> cache.
>> >>>
>> >>> Basically I'm wanting to create window on the messages that haven't
>> been
>> >>> loaded into the db.
>> >>>
>> >>> I've seen ways of getting it from the command line but I'd like to
>> from
>> >>> with in code.
>> >>>
>> >>> Currently I'm using node-rdkafka.
>> >>>
>> >>> I guess as a last resort I could shell the command line for the
>> offsets
>> >>> then parse it and get it that way.
>> >>>
>> >>>
>> >>> Ronald Fenner
>> >>> Network Architect
>> >>> Game Circus LLC.
>> >>>
>> >>> rfen...@gamecircus.com
>> >>>
>> >>>
>>
>>


Re: Reading offset from one consumer group to use for another consumer group.

2021-05-28 Thread Ran Lupovich
So I think you should write to your db the partition and the offset; while
initializing the real-time consumer you'd read from the database where to set the
consumer starting point, kind of the "exactly once" programming approach,
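
A rough sketch of that idea with kafka-python (the topic, partition and offset
below are placeholders; in practice they would be read from wherever the hourly
DB loader records its progress):

from kafka import KafkaConsumer, TopicPartition

# values the hourly DB loader stored after its last run (placeholders)
tp = TopicPartition("events", 0)
stored_offset = 12345

consumer = KafkaConsumer(
    bootstrap_servers="localhost:9092",
    group_id="realtime-viewer",      # separate group from the DB loader
    enable_auto_commit=False,
)
consumer.assign([tp])                # manual assignment, no subscribe()
consumer.seek(tp, stored_offset)     # start exactly where the DB loader stopped

for record in consumer:
    # feed the in-memory cache / viewer here
    print(record.partition, record.offset, record.value)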

בתאריך יום ו׳, 28 במאי 2021, 21:38, מאת Ronald Fenner ‏<
rfen...@gamecircus.com>:

> That might work if my consumers were in the same process but the db
> consumer is a python job running under Airflow and the realtime consumer
> wold be running as a backend service on another server.
>
> Also how would I seed the realtime consumer at startup if the consumer
> isn't running which would could be possible if it hit the end stream,
>
> The db consumer is designed to read until no new message is delivered then
> exit till it's next spawned.
>
> Ronald Fenner
> Network Architect
> Game Circus LLC.
>
> rfen...@gamecircus.com
>
> > On May 28, 2021, at 12:05 AM, Ran Lupovich 
> wrote:
> >
> >
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)
> >
> > בתאריך יום ו׳, 28 במאי 2021, 08:04, מאת Ran Lupovich ‏<
> ranlupov...@gmail.com
> >> :
> >
> >> While your DB consumer is running you get the access to the partition
> >> ${partition} @ offset ${offset}
> >>
> >>
> https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.jswhen
> >> setting your second consumers for real time just set them tostart from
> that
> >> point
> >>
> >>
> >> בתאריך יום ו׳, 28 במאי 2021, 01:51, מאת Ronald Fenner ‏<
> >> rfen...@gamecircus.com>:
> >>
> >>> I'm trying to figure out how to pragmatically read a consumer groups
> >>> offset for a topic.
> >>> What I'm trying to do is read the offsets of our DB consumers that run
> >>> once an hour and batch lad all new messages. I then would have another
> >>> consumer that monitors the offsets that have been consumed and consume
> the
> >>> message not yet loaded storing them in  memory to be able to send them
> to a
> >>> viewer. As messages get consumed they then get pruned from the in
> memory
> >>> cache.
> >>>
> >>> Basically I'm wanting to create window on the messages that haven't
> been
> >>> loaded into the db.
> >>>
> >>> I've seen ways of getting it from the command line but I'd like to from
> >>> with in code.
> >>>
> >>> Currently I'm using node-rdkafka.
> >>>
> >>> I guess as a last resort I could shell the command line for the offsets
> >>> then parse it and get it that way.
> >>>
> >>>
> >>> Ronald Fenner
> >>> Network Architect
> >>> Game Circus LLC.
> >>>
> >>> rfen...@gamecircus.com
> >>>
> >>>
>
>


Re: Issue using Https with elasticsearch source connector

2021-05-28 Thread Ran Lupovich
trustStore

JAVA_OPTS=$JAVA_OPTS
-Djavax.net.ssl.trustStore=/path/to/truststore.jks
-Djavax.net.ssl.trustStoreType=jks
-Djavax.net.ssl.trustStorePassword=changeit

keyStore

JAVA_OPTS=$JAVA_OPTS -Djavax.net.ssl.keyStore=/path/to/keystore.jks
-Djavax.net.ssl.keyStoreType=jks
-Djavax.net.ssl.keyStorePassword=changeit


Check for setting this in $KAFKA_OPTS
<https://doc.nuxeo.com/nxdoc/trust-store-and-key-store-configuration/#adding-your-certificates-to-the-default-trust-store>

בתאריך יום ו׳, 28 במאי 2021, 15:24, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> Yeah.
> I am trying to add truststore in java keystore
> Lets see
>
> On Fri, 28 May 2021 at 5:40 PM, Ran Lupovich 
> wrote:
>
> > Anyways you need to remmber it is a java application and you can pass
> many
> > variables that not formally supported by the application as jvm input
> > setting or in the connector OPTS, does not have experience with this
> > specfic source connector did something similar as work arounf for the
> > mongodb sink connector before they fixed the support for ssl, so I do
> > beleive its possible , its matter of guess try and see ,  but i do
> > believe its possible
> >
> > בתאריך יום ו׳, 28 במאי 2021, 15:05, מאת sunil chaudhari ‏<
> > sunilmchaudhar...@gmail.com>:
> >
> > > Hello Ran,
> > > Whatever link you have provided is the supported SINK connector.
> > > It has all settings for SSL.
> > >
> > > The connector I am talking about is the Souce connector and its not
> > > supported by Confluent.
> > > If you see the documentation you will find that there is no setting for
> > SSL
> > > certs.
> > >
> > > https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source
> > >
> > >
> > > Thats where I am stuck.
> > >
> > >
> > > Regards,
> > > Sunil.
> > >
> > > On Fri, 28 May 2021 at 9:34 AM, Ran Lupovich 
> > > wrote:
> > >
> > > >
> > > >
> > >
> >
> name=elasticsearch-sinkconnector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnectortasks.max=1topics=test-elasticsearch-sinkkey.ignore=trueconnection.url=
> > > > https://localhost:9200type.name=kafka-connect
> > > >
> > > >
> > >
> >
> elastic.security.protocol=SSLelastic.https.ssl.keystore.location=/home/directory/elasticsearch-6.6.0/config/certs/keystore.jkselastic.https.ssl.keystore.password=asdfasdfelastic.https.ssl.key.password=asdfasdfelastic.https.ssl.keystore.type=JKSelastic.https.ssl.truststore.location=/home/directory/elasticsearch-6.6.0/config/certs/truststore.jkselastic.https.ssl.truststore.password=asdfasdfelastic.https.ssl.truststore.type=JKSelastic.https.ssl.protocol=TLS
> > > >
> > > >
> > > > בתאריך יום ו׳, 28 במאי 2021, 07:03, מאת Ran Lupovich ‏<
> > > > ranlupov...@gmail.com
> > > > >:
> > > >
> > > > >
> > > >
> > >
> >
> https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
> > > > >
> > > > > בתאריך יום ו׳, 28 במאי 2021, 07:00, מאת sunil chaudhari ‏<
> > > > > sunilmchaudhar...@gmail.com>:
> > > > >
> > > > >> The configurations doesnt have provision for the truststore. Thats
> > my
> > > > >> concern.
> > > > >>
> > > > >>
> > > > >> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich <
> > ranlupov...@gmail.com>
> > > > >> wrote:
> > > > >>
> > > > >> > For https connections you need to set truststore configuration
> > > > >> parameters ,
> > > > >> > giving it jks with password , the jks needs the contain the
> > > certficate
> > > > >> of
> > > > >> > CA that is signing your certifcates
> > > > >> >
> > > > >> > בתאריך יום ה׳, 27 במאי 2021, 19:55, מאת sunil chaudhari ‏<
> > > > >> > sunilmchaudhar...@gmail.com>:
> > > > >> >
> > > > >> > > Hi Ran,
> > > > >> > > That problem is solved already.
> > > > >> > > If you read complete thread and see that last problem is about
> > > https
> > > > >> > > connection.
> > > > >> > >
> > > > >> > >
> > > > >> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich <
> > > ranlupov...@gmail.com
> > > > >
> > > > >> > > wrote:
> > > > >> > >
> > > > >> > > > Try setting  es.port = "9200" without quotes?
> > > > >> > > >
> > > > >> > > > בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
> > > > >> > > > sunilmchaudhar...@gmail.com>:
> > > > >> > > >
> > > > >> > > > > Hello team,
> > > > >> > > > > Can anyone help me with this issue?
> > > > >> > > > >
> > > > >> > > > >
> > > > >> > > > >
> > > > >> > > >
> > > > >> > >
> > > > >> >
> > > > >>
> > > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > > > >> > > > >
> > > > >> > > > >
> > > > >> > > > > Regards,
> > > > >> > > > > Sunil.
> > > > >> > > > >
> > > > >> > > >
> > > > >> > >
> > > > >> >
> > > > >>
> > > > >
> > > >
> > >
> >
>


Re: Issue using Https with elasticsearch source connector

2021-05-28 Thread Ran Lupovich
Anyway, you need to remember it is a Java application and you can pass many
variables that are not formally supported by the application as JVM input
settings or in the connector OPTS. I do not have experience with this
specific source connector, but I did something similar as a workaround for the
mongodb sink connector before they fixed the support for ssl, so I do
believe it's possible; it's a matter of guess, try and see.
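
For example (paths and password are placeholders, and whether this particular
source connector actually honours the standard JVM trust-store properties
would have to be tested), they can be passed to the Connect worker at startup:

export KAFKA_OPTS="-Djavax.net.ssl.trustStore=/path/to/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
bin/connect-distributed.sh config/connect-distributed.properties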

בתאריך יום ו׳, 28 במאי 2021, 15:05, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> Hello Ran,
> Whatever link you have provided is the supported SINK connector.
> It has all settings for SSL.
>
> The connector I am talking about is the Souce connector and its not
> supported by Confluent.
> If you see the documentation you will find that there is no setting for SSL
> certs.
>
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source
>
>
> Thats where I am stuck.
>
>
> Regards,
> Sunil.
>
> On Fri, 28 May 2021 at 9:34 AM, Ran Lupovich 
> wrote:
>
> >
> >
> name=elasticsearch-sinkconnector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnectortasks.max=1topics=test-elasticsearch-sinkkey.ignore=trueconnection.url=
> > https://localhost:9200type.name=kafka-connect
> >
> >
> elastic.security.protocol=SSLelastic.https.ssl.keystore.location=/home/directory/elasticsearch-6.6.0/config/certs/keystore.jkselastic.https.ssl.keystore.password=asdfasdfelastic.https.ssl.key.password=asdfasdfelastic.https.ssl.keystore.type=JKSelastic.https.ssl.truststore.location=/home/directory/elasticsearch-6.6.0/config/certs/truststore.jkselastic.https.ssl.truststore.password=asdfasdfelastic.https.ssl.truststore.type=JKSelastic.https.ssl.protocol=TLS
> >
> >
> > בתאריך יום ו׳, 28 במאי 2021, 07:03, מאת Ran Lupovich ‏<
> > ranlupov...@gmail.com
> > >:
> >
> > >
> >
> https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
> > >
> > > בתאריך יום ו׳, 28 במאי 2021, 07:00, מאת sunil chaudhari ‏<
> > > sunilmchaudhar...@gmail.com>:
> > >
> > >> The configurations doesnt have provision for the truststore. Thats my
> > >> concern.
> > >>
> > >>
> > >> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich 
> > >> wrote:
> > >>
> > >> > For https connections you need to set truststore configuration
> > >> parameters ,
> > >> > giving it jks with password , the jks needs the contain the
> certficate
> > >> of
> > >> > CA that is signing your certifcates
> > >> >
> > >> > בתאריך יום ה׳, 27 במאי 2021, 19:55, מאת sunil chaudhari ‏<
> > >> > sunilmchaudhar...@gmail.com>:
> > >> >
> > >> > > Hi Ran,
> > >> > > That problem is solved already.
> > >> > > If you read complete thread and see that last problem is about
> https
> > >> > > connection.
> > >> > >
> > >> > >
> > >> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich <
> ranlupov...@gmail.com
> > >
> > >> > > wrote:
> > >> > >
> > >> > > > Try setting  es.port = "9200" without quotes?
> > >> > > >
> > >> > > > בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
> > >> > > > sunilmchaudhar...@gmail.com>:
> > >> > > >
> > >> > > > > Hello team,
> > >> > > > > Can anyone help me with this issue?
> > >> > > > >
> > >> > > > >
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > >> > > > >
> > >> > > > >
> > >> > > > > Regards,
> > >> > > > > Sunil.
> > >> > > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> >
>


Re: Reading offset from one consumer group to use for another consumer group.

2021-05-27 Thread Ran Lupovich
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seek(org.apache.kafka.common.TopicPartition,%20long)

בתאריך יום ו׳, 28 במאי 2021, 08:04, מאת Ran Lupovich ‏:

> While your DB consumer is running you get the access to the partition
> ${partition} @ offset ${offset}
>
> https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.jswhen
> setting your second consumers for real time just set them tostart from that
> point
>
>
> בתאריך יום ו׳, 28 במאי 2021, 01:51, מאת Ronald Fenner ‏<
> rfen...@gamecircus.com>:
>
>> I'm trying to figure out how to pragmatically read a consumer groups
>> offset for a topic.
>> What I'm trying to do is read the offsets of our DB consumers that run
>> once an hour and batch lad all new messages. I then would have another
>> consumer that monitors the offsets that have been consumed and consume the
>> message not yet loaded storing them in  memory to be able to send them to a
>> viewer. As messages get consumed they then get pruned from the in memory
>> cache.
>>
>> Basically I'm wanting to create window on the messages that haven't been
>> loaded into the db.
>>
>> I've seen ways of getting it from the command line but I'd like to from
>> with in code.
>>
>> Currently I'm using node-rdkafka.
>>
>> I guess as a last resort I could shell the command line for the offsets
>> then parse it and get it that way.
>>
>>
>> Ronald Fenner
>> Network Architect
>> Game Circus LLC.
>>
>> rfen...@gamecircus.com
>>
>>


Re: Reading offset from one consumer group to use for another consumer group.

2021-05-27 Thread Ran Lupovich
While your DB consumer is running you get the access to the partition
${partition} @ offset ${offset}
https://github.com/confluentinc/examples/blob/6.1.1-post/clients/cloud/nodejs/consumer.js
When setting your second consumers for real time, just set them to start from that
point


בתאריך יום ו׳, 28 במאי 2021, 01:51, מאת Ronald Fenner ‏<
rfen...@gamecircus.com>:

> I'm trying to figure out how to pragmatically read a consumer groups
> offset for a topic.
> What I'm trying to do is read the offsets of our DB consumers that run
> once an hour and batch lad all new messages. I then would have another
> consumer that monitors the offsets that have been consumed and consume the
> message not yet loaded storing them in  memory to be able to send them to a
> viewer. As messages get consumed they then get pruned from the in memory
> cache.
>
> Basically I'm wanting to create window on the messages that haven't been
> loaded into the db.
>
> I've seen ways of getting it from the command line but I'd like to from
> with in code.
>
> Currently I'm using node-rdkafka.
>
> I guess as a last resort I could shell the command line for the offsets
> then parse it and get it that way.
>
>
> Ronald Fenner
> Network Architect
> Game Circus LLC.
>
> rfen...@gamecircus.com
>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
name=elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
tasks.max=1
topics=test-elasticsearch-sink
key.ignore=true
connection.url=https://localhost:9200
type.name=kafka-connect
elastic.security.protocol=SSL
elastic.https.ssl.keystore.location=/home/directory/elasticsearch-6.6.0/config/certs/keystore.jks
elastic.https.ssl.keystore.password=asdfasdf
elastic.https.ssl.key.password=asdfasdf
elastic.https.ssl.keystore.type=JKS
elastic.https.ssl.truststore.location=/home/directory/elasticsearch-6.6.0/config/certs/truststore.jks
elastic.https.ssl.truststore.password=asdfasdf
elastic.https.ssl.truststore.type=JKS
elastic.https.ssl.protocol=TLS


בתאריך יום ו׳, 28 במאי 2021, 07:03, מאת Ran Lupovich ‏:

> https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html
>
> בתאריך יום ו׳, 28 במאי 2021, 07:00, מאת sunil chaudhari ‏<
> sunilmchaudhar...@gmail.com>:
>
>> The configurations doesnt have provision for the truststore. Thats my
>> concern.
>>
>>
>> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich 
>> wrote:
>>
>> > For https connections you need to set truststore configuration
>> parameters ,
>> > giving it jks with password , the jks needs the contain the certficate
>> of
>> > CA that is signing your certifcates
>> >
>> > בתאריך יום ה׳, 27 במאי 2021, 19:55, מאת sunil chaudhari ‏<
>> > sunilmchaudhar...@gmail.com>:
>> >
>> > > Hi Ran,
>> > > That problem is solved already.
>> > > If you read complete thread and see that last problem is about https
>> > > connection.
>> > >
>> > >
>> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
>> > > wrote:
>> > >
>> > > > Try setting  es.port = "9200" without quotes?
>> > > >
>> > > > בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
>> > > > sunilmchaudhar...@gmail.com>:
>> > > >
>> > > > > Hello team,
>> > > > > Can anyone help me with this issue?
>> > > > >
>> > > > >
>> > > > >
>> > > >
>> > >
>> >
>> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
>> > > > >
>> > > > >
>> > > > > Regards,
>> > > > > Sunil.
>> > > > >
>> > > >
>> > >
>> >
>>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
https://docs.confluent.io/kafka-connect-elasticsearch/current/security.html

בתאריך יום ו׳, 28 במאי 2021, 07:00, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> The configurations doesnt have provision for the truststore. Thats my
> concern.
>
>
> On Thu, 27 May 2021 at 10:47 PM, Ran Lupovich 
> wrote:
>
> > For https connections you need to set truststore configuration
> parameters ,
> > giving it jks with password , the jks needs the contain the certficate of
> > CA that is signing your certifcates
> >
> > בתאריך יום ה׳, 27 במאי 2021, 19:55, מאת sunil chaudhari ‏<
> > sunilmchaudhar...@gmail.com>:
> >
> > > Hi Ran,
> > > That problem is solved already.
> > > If you read complete thread and see that last problem is about https
> > > connection.
> > >
> > >
> > > On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
> > > wrote:
> > >
> > > > Try setting  es.port = "9200" without quotes?
> > > >
> > > > בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
> > > > sunilmchaudhar...@gmail.com>:
> > > >
> > > > > Hello team,
> > > > > Can anyone help me with this issue?
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > > > >
> > > > >
> > > > > Regards,
> > > > > Sunil.
> > > > >
> > > >
> > >
> >
>


Re: Kafka getting down every week due to log file deletion.

2021-05-27 Thread Ran Lupovich
The main purpose of the /tmp directory is to temporarily store files when
installing an OS or software. If any files in the /tmp directory have
not been accessed for a while, they will be automatically deleted from
the system
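
So the usual fix is to point the broker at persistent storage instead of /tmp
and restart it, e.g. in server.properties (the path below is only an example):

log.dirs=/var/lib/kafka/data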

בתאריך יום ה׳, 27 במאי 2021, 19:04, מאת Ran Lupovich ‏:

> Seems you log dir is sending your data to tmp folder, if I am bot mistken
> this dir automatically removing files from itself, causing the log deletuon
> procedure of the kafka internal to fail and shutdown broker on file not
> found
>
> בתאריך יום ה׳, 27 במאי 2021, 17:52, מאת Neeraj Gulia ‏<
> neeraj.gu...@opsworld.in>:
>
>> Hi team,
>>
>> Our Kafka is getting down almost once or twice a month due to log file
>> deletion failure.
>>
>>
>> There is single node kafka broker is running in our system and gets down
>> every time it tires to delete the log files as cleanup and fails.
>>
>> Sharing the Error Logs, we need a robust solution for this so that our
>> kafka broker doesn't gets down like this every time.
>>
>> Regards,
>> Neeraj Gulia
>>
>> Caused by: java.io.FileNotFoundException:
>> /tmp/kafka-logs/dokutopic-0/.index (No such file or
>> directory)
>> at java.base/java.io.RandomAccessFile.open0(Native Method)
>> at java.base/java.io.RandomAccessFile.open(RandomAccessFile.java:345)
>> at java.base/java.io.RandomAccessFile.(RandomAccessFile.java:259)
>> at java.base/java.io.RandomAccessFile.(RandomAccessFile.java:214)
>> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:183)
>> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:176)
>> at
>>
>> kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:242)
>> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:242)
>> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
>> at kafka.log.Log.$anonfun$roll$8(Log.scala:1954)
>> at kafka.log.Log.$anonfun$roll$2(Log.scala:1954)
>> at kafka.log.Log.roll(Log.scala:2387)
>> at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1749)
>> at kafka.log.Log.deleteSegments(Log.scala:2387)
>> at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1737)
>> at kafka.log.Log.deleteOldSegments(Log.scala:1806)
>> at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1074)
>> at
>> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1071)
>> at scala.collection.immutable.List.foreach(List.scala:431)
>> at kafka.log.LogManager.cleanupLogs(LogManager.scala:1071)
>> at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:409)
>> at
>> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
>> at
>>
>> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>> at
>> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
>> at
>>
>> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>> at
>>
>> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>> at
>>
>> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>> at java.base/java.lang.Thread.run(Thread.java:829)
>> [2021-05-27 09:34:07,972] WARN [ReplicaManager broker=0] Broker 0 stopped
>> fetcher for partitions
>>
>> __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,fliptopic-0,__consumer_offsets-25,webhook-events-0,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,dokutopic-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,post_payment_topic-0,__consumer_offsets-18,__consumer_offsets-37,topic-0,events-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,disbursementtopic-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,faspaytopic-0
>> and stopped moving logs for partitions because they are in the failed log
>> directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
>> [2021-05-27 09:34:07,974] WARN Stopping serving logs in dir
>> /tmp/kafka-logs
>> (kafka.log.LogManager)
>> [2021-05-27 09:34:07,983] ERROR Shutdown broker because all log dirs in
>> /tmp/kafka-logs have failed (kafka.log.LogManager)
>>
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
For https connections you need to set truststore configuration parameters,
giving it a jks with a password; the jks needs to contain the certificate of
the CA that is signing your certificates
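
Building such a truststore from the CA certificate typically looks like this
(file names, alias and password are placeholders):

keytool -import -trustcacerts -alias my-ca -file ca.pem \
  -keystore truststore.jks -storepass changeit -noprompt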

בתאריך יום ה׳, 27 במאי 2021, 19:55, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> Hi Ran,
> That problem is solved already.
> If you read complete thread and see that last problem is about https
> connection.
>
>
> On Thu, 27 May 2021 at 8:01 PM, Ran Lupovich 
> wrote:
>
> > Try setting  es.port = "9200" without quotes?
> >
> > בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
> > sunilmchaudhar...@gmail.com>:
> >
> > > Hello team,
> > > Can anyone help me with this issue?
> > >
> > >
> > >
> >
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
> > >
> > >
> > > Regards,
> > > Sunil.
> > >
> >
>


Re: Kafka getting down every week due to log file deletion.

2021-05-27 Thread Ran Lupovich
Seems your log dir is sending your data to the tmp folder; if I am not mistaken
this dir automatically removes files from itself, causing the internal Kafka log
deletion procedure to fail and shut down the broker on "file not
found"

בתאריך יום ה׳, 27 במאי 2021, 17:52, מאת Neeraj Gulia ‏<
neeraj.gu...@opsworld.in>:

> Hi team,
>
> Our Kafka is getting down almost once or twice a month due to log file
> deletion failure.
>
>
> There is single node kafka broker is running in our system and gets down
> every time it tires to delete the log files as cleanup and fails.
>
> Sharing the Error Logs, we need a robust solution for this so that our
> kafka broker doesn't gets down like this every time.
>
> Regards,
> Neeraj Gulia
>
> Caused by: java.io.FileNotFoundException:
> /tmp/kafka-logs/dokutopic-0/.index (No such file or
> directory)
> at java.base/java.io.RandomAccessFile.open0(Native Method)
> at java.base/java.io.RandomAccessFile.open(RandomAccessFile.java:345)
> at java.base/java.io.RandomAccessFile.(RandomAccessFile.java:259)
> at java.base/java.io.RandomAccessFile.(RandomAccessFile.java:214)
> at kafka.log.AbstractIndex.$anonfun$resize$1(AbstractIndex.scala:183)
> at kafka.log.AbstractIndex.resize(AbstractIndex.scala:176)
> at
> kafka.log.AbstractIndex.$anonfun$trimToValidSize$1(AbstractIndex.scala:242)
> at kafka.log.AbstractIndex.trimToValidSize(AbstractIndex.scala:242)
> at kafka.log.LogSegment.onBecomeInactiveSegment(LogSegment.scala:508)
> at kafka.log.Log.$anonfun$roll$8(Log.scala:1954)
> at kafka.log.Log.$anonfun$roll$2(Log.scala:1954)
> at kafka.log.Log.roll(Log.scala:2387)
> at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1749)
> at kafka.log.Log.deleteSegments(Log.scala:2387)
> at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1737)
> at kafka.log.Log.deleteOldSegments(Log.scala:1806)
> at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:1074)
> at
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:1071)
> at scala.collection.immutable.List.foreach(List.scala:431)
> at kafka.log.LogManager.cleanupLogs(LogManager.scala:1071)
> at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:409)
> at kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:114)
> at
>
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at
> java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
> at
>
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
>
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:829)
> [2021-05-27 09:34:07,972] WARN [ReplicaManager broker=0] Broker 0 stopped
> fetcher for partitions
>
> __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,fliptopic-0,__consumer_offsets-25,webhook-events-0,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,dokutopic-0,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,post_payment_topic-0,__consumer_offsets-18,__consumer_offsets-37,topic-0,events-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,disbursementtopic-0,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40,faspaytopic-0
> and stopped moving logs for partitions because they are in the failed log
> directory /tmp/kafka-logs. (kafka.server.ReplicaManager)
> [2021-05-27 09:34:07,974] WARN Stopping serving logs in dir /tmp/kafka-logs
> (kafka.log.LogManager)
> [2021-05-27 09:34:07,983] ERROR Shutdown broker because all log dirs in
> /tmp/kafka-logs have failed (kafka.log.LogManager)
>


Re: Issue using Https with elasticsearch source connector

2021-05-27 Thread Ran Lupovich
Try setting  es.port = "9200" without quotes?

בתאריך יום ה׳, 27 במאי 2021, 04:21, מאת sunil chaudhari ‏<
sunilmchaudhar...@gmail.com>:

> Hello team,
> Can anyone help me with this issue?
>
>
> https://github.com/DarioBalinzo/kafka-connect-elasticsearch-source/issues/44
>
>
> Regards,
> Sunil.
>


Re: Modify kafka-connect api context path

2021-05-26 Thread Ran Lupovich
--server.servlet.context-path="/kafdrop"
Something like this ?
https://github.com/obsidiandynamics/kafdrop/issues/9


On Wed, May 26, 2021, 23:44, Fernando Moraes <
fernandosdemor...@gmail.com>:

> Hello, I would like to know if it is possible to modify via config
> properties the kafka-connect context path. I have a scenario where the
> proxy redirects a request to a connect worker using a context path.
>
> I've already looked at the source code here, and it doesn't really seem to
> have a point for configuration:
> (
>
> https://github.com/apache/kafka/blob/trunk/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/rest/RestServer.java#L264
> )
>


Re: Weird behavior of topic retention - some are cleaned up too often, some are not at all

2021-05-25 Thread Ran Lupovich
Sorry, I did not see all the info at first. What do you mean by the topic
getting cleaned? You have a setting that checks retention every 5 minutes;
the data that is getting "cleaned" is the older data, which is 30 days
old... am I missing something?

On Tue, May 25, 2021, 23:04, Ran Lupovich wrote:

> By the segment size you are "delete" after 1 giga bytes is full , per
> partition,  you need to remmber the retention is done when segments closed
> , per partition
>
> On Tue, May 25, 2021, 22:59, Ran Lupovich <
> ranlupov...@gmail.com>:
>
>> Have you checked the segment size? Did you decribe the topic
>> configuration?maybe you created it with some settings you dont remember
>>
>> On Tue, May 25, 2021, 19:51, Marina Popova wrote:
>>
>>>
>>> Any idea what is wrong here? I have restarted Kafka brokers a few times,
>>> and all other Confluent services like KSQL - but I see exactly the same
>>> behavior - one topic gets its logs cleaned up every 5 minutes, while the
>>> other one - does not get cleaned up at all 
>>>
>>>
>>> Is there anything else I could check, apart from what I already did -
>>> see the post below - to troubleshoot this?
>>>
>>>
>>> thank you!
>>> Marina
>>>
>>> Sent with ProtonMail Secure Email.
>>>
>>> ‐‐‐ Original Message ‐‐‐
>>> On Thursday, May 20, 2021 2:10 PM, Marina Popova 
>>> 
>>> wrote:
>>>
>>> > Hi, I have posted this question on SO:
>>> >
>>> https://stackoverflow.com/questions/67625641/kafka-segments-are-deleted-too-often-or-not-at-all
>>> > but wanted to re-post here as well in case someone spots the issue
>>> right away 
>>> >
>>> > Thank you for your help!
>>> >
>>> > > > > > >
>>> >
>>> > We have two topics on our Kafka cluster that exhibit weird (wrong)
>>> behavior related to retention configuration.
>>> >
>>> > One topic, tracking.ap.client.traffic, has retention set explicitly to
>>> "retention.ms=1440" (4 hrs) - but it is not cleaned up , grows in
>>> size, and caused 2 out of 3 kafka brokers to run out of disk space.
>>> (details about config and log below)
>>> >
>>> > Second topic, tracking.ap.client.traffic.keyed, is created in KSQL as
>>> a stream topic:
>>> >
>>> > CREATE STREAM AP_CLIENT_TRAFFIC_KEYED
>>> > WITH (KAFKA_TOPIC='tracking.ap.client.traffic.keyed',
>>> TIMESTAMP='ACTIVITY_DATE', VALUE_FORMAT='JSON', PARTITIONS=6)
>>> > AS SELECT
>>> > ...
>>> >
>>> >
>>> >
>>> > its retention is set to the default broker value, which is 720 hrs :
>>> >
>>> > cat /etc/kafka/server.properties | grep retention
>>> > log.retention.hours=720
>>> > # A size-based retention policy for logs. Segments are pruned from
>>> the log unless the remaining
>>> > # segments drop below log.retention.bytes. Functions independently
>>> of log.retention.hours.
>>> > #log.retention.bytes=1073741824
>>> > # to the retention policies
>>> > log.retention.check.interval.ms=30
>>> >
>>> >
>>> > This topic, though, gets cleaned up every 5 min or so - according to
>>> the logs
>>> > The log entry says the segment is marked for deletion "due to
>>> retention time 259200ms breach (kafka.log.Log)" - but how can that be
>>> true if this is happening every 5 min??
>>> >
>>> > No size-based retention is set for any of the topics.
>>> >
>>> > Two questions:
>>> >
>>> > 1.  why is the first topic not being cleaned up?
>>> > 2.  why is the second topic being cleaned up so often?
>>> >
>>> > Below are the details about logs and full config of both topics:
>>> >
>>> > log entries for tracking.ap.client.traffic.keyed-2 topic/partition
>>> - show that this partition is getting cleaned too often:
>>> >
>>> >
>>> > [2021-05-19 21:35:05,822] INFO [Log
>>> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
>>> Incrementing log start offset to 11755700 (kafka.log.Log)
>>> > [2021-05-19 21:36:05,822] INFO [Log
>>> partition=tracking.ap.client.traffic.keyed-2, dir=/

Re: Weird behavior of topic retention - some are cleaned up too often, some are not at all

2021-05-25 Thread Ran Lupovich
Regarding the segment size: segments only become eligible for deletion
after 1 gigabyte is full, per partition. You need to remember that
retention is applied only when segments are closed, per partition.
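
If the problem is that the active segment never fills up (so it never
closes and time-based retention never kicks in), one option is to roll
segments more often on that topic. A rough sketch, untested, with a
placeholder broker address (segment.ms=3600000 rolls at least every hour):

kafka-configs --bootstrap-server broker1:9092 --alter \
  --entity-type topics --entity-name tracking.ap.client.traffic \
  --add-config segment.ms=3600000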

On Tue, May 25, 2021, 22:59, Ran Lupovich wrote:

> Have you checked the segment size? Did you decribe the topic
> configuration?maybe you created it with some settings you dont remember
>
> On Tue, May 25, 2021, 19:51, Marina Popova wrote:
>
>>
>> Any idea what is wrong here? I have restarted Kafka brokers a few times,
>> and all other Confluent services like KSQL - but I see exactly the same
>> behavior - one topic gets its logs cleaned up every 5 minutes, while the
>> other one - does not get cleaned up at all 
>>
>>
>> Is there anything else I could check, apart from what I already did - see
>> the post below - to troubleshoot this?
>>
>>
>> thank you!
>> Marina
>>
>> Sent with ProtonMail Secure Email.
>>
>> ‐‐‐ Original Message ‐‐‐
>> On Thursday, May 20, 2021 2:10 PM, Marina Popova 
>> 
>> wrote:
>>
>> > Hi, I have posted this question on SO:
>> >
>> https://stackoverflow.com/questions/67625641/kafka-segments-are-deleted-too-often-or-not-at-all
>> > but wanted to re-post here as well in case someone spots the issue
>> right away 
>> >
>> > Thank you for your help!
>> >
>> > > > > > >
>> >
>> > We have two topics on our Kafka cluster that exhibit weird (wrong)
>> behavior related to retention configuration.
>> >
>> > One topic, tracking.ap.client.traffic, has retention set explicitly to "
>> retention.ms=1440" (4 hrs) - but it is not cleaned up , grows in
>> size, and caused 2 out of 3 kafka brokers to run out of disk space.
>> (details about config and log below)
>> >
>> > Second topic, tracking.ap.client.traffic.keyed, is created in KSQL as a
>> stream topic:
>> >
>> > CREATE STREAM AP_CLIENT_TRAFFIC_KEYED
>> > WITH (KAFKA_TOPIC='tracking.ap.client.traffic.keyed',
>> TIMESTAMP='ACTIVITY_DATE', VALUE_FORMAT='JSON', PARTITIONS=6)
>> > AS SELECT
>> > ...
>> >
>> >
>> >
>> > its retention is set to the default broker value, which is 720 hrs :
>> >
>> > cat /etc/kafka/server.properties | grep retention
>> > log.retention.hours=720
>> > # A size-based retention policy for logs. Segments are pruned from
>> the log unless the remaining
>> > # segments drop below log.retention.bytes. Functions independently
>> of log.retention.hours.
>> > #log.retention.bytes=1073741824
>> > # to the retention policies
>> > log.retention.check.interval.ms=30
>> >
>> >
>> > This topic, though, gets cleaned up every 5 min or so - according to
>> the logs
>> > The log entry says the segment is marked for deletion "due to retention
>> time 259200ms breach (kafka.log.Log)" - but how can that be true if
>> this is happening every 5 min??
>> >
>> > No size-based retention is set for any of the topics.
>> >
>> > Two questions:
>> >
>> > 1.  why is the first topic not being cleaned up?
>> > 2.  why is the second topic being cleaned up so often?
>> >
>> > Below are the details about logs and full config of both topics:
>> >
>> > log entries for tracking.ap.client.traffic.keyed-2 topic/partition
>> - show that this partition is getting cleaned too often:
>> >
>> >
>> > [2021-05-19 21:35:05,822] INFO [Log
>> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
>> Incrementing log start offset to 11755700 (kafka.log.Log)
>> > [2021-05-19 21:36:05,822] INFO [Log
>> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
>> Deleting segment 11753910 (kafka.log.Log)
>> > [2021-05-19 21:36:05,825] INFO Deleted log
>> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.log.deleted.
>> (kafka.log.LogSegment)
>> > [2021-05-19 21:36:05,827] INFO Deleted offset index
>> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.index.deleted.
>> (kafka.log.LogSegment)
>> > [2021-05-19 21:36:05,829] INFO Deleted time index
>> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.timeindex.deleted.
>> (kafka.log.LogSegment)
>> > [2021-05-19 21:40:05,838] INFO [Log
>> partition=track

Re: Weird behavior of topic retention - some are cleaned up too often, some are not at all

2021-05-25 Thread Ran Lupovich
Have you checked the segment size? Did you describe the topic
configuration? Maybe you created it with some settings you don't remember.
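
For example (untested; replace the broker address with one of yours):

kafka-topics --bootstrap-server broker1:9092 --describe --topic tracking.ap.client.traffic
kafka-configs --bootstrap-server broker1:9092 --describe --entity-type topics --entity-name tracking.ap.client.traffic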

On Tue, May 25, 2021, 19:51, Marina Popova wrote:

>
> Any idea what is wrong here? I have restarted Kafka brokers a few times,
> and all other Confluent services like KSQL - but I see exactly the same
> behavior - one topic gets its logs cleaned up every 5 minutes, while the
> other one - does not get cleaned up at all 
>
>
> Is there anything else I could check, apart from what I already did - see
> the post below - to troubleshoot this?
>
>
> thank you!
> Marina
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, May 20, 2021 2:10 PM, Marina Popova 
> 
> wrote:
>
> > Hi, I have posted this question on SO:
> >
> https://stackoverflow.com/questions/67625641/kafka-segments-are-deleted-too-often-or-not-at-all
> > but wanted to re-post here as well in case someone spots the issue right
> away 
> >
> > Thank you for your help!
> >
> > > > > > >
> >
> > We have two topics on our Kafka cluster that exhibit weird (wrong)
> behavior related to retention configuration.
> >
> > One topic, tracking.ap.client.traffic, has retention set explicitly to "
> retention.ms=1440" (4 hrs) - but it is not cleaned up , grows in
> size, and caused 2 out of 3 kafka brokers to run out of disk space.
> (details about config and log below)
> >
> > Second topic, tracking.ap.client.traffic.keyed, is created in KSQL as a
> stream topic:
> >
> > CREATE STREAM AP_CLIENT_TRAFFIC_KEYED
> > WITH (KAFKA_TOPIC='tracking.ap.client.traffic.keyed',
> TIMESTAMP='ACTIVITY_DATE', VALUE_FORMAT='JSON', PARTITIONS=6)
> > AS SELECT
> > ...
> >
> >
> >
> > its retention is set to the default broker value, which is 720 hrs :
> >
> > cat /etc/kafka/server.properties | grep retention
> > log.retention.hours=720
> > # A size-based retention policy for logs. Segments are pruned from
> the log unless the remaining
> > # segments drop below log.retention.bytes. Functions independently
> of log.retention.hours.
> > #log.retention.bytes=1073741824
> > # to the retention policies
> > log.retention.check.interval.ms=30
> >
> >
> > This topic, though, gets cleaned up every 5 min or so - according to the
> logs
> > The log entry says the segment is marked for deletion "due to retention
> time 259200ms breach (kafka.log.Log)" - but how can that be true if
> this is happening every 5 min??
> >
> > No size-based retention is set for any of the topics.
> >
> > Two questions:
> >
> > 1.  why is the first topic not being cleaned up?
> > 2.  why is the second topic being cleaned up so often?
> >
> > Below are the details about logs and full config of both topics:
> >
> > log entries for tracking.ap.client.traffic.keyed-2 topic/partition -
> show that this partition is getting cleaned too often:
> >
> >
> > [2021-05-19 21:35:05,822] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
> Incrementing log start offset to 11755700 (kafka.log.Log)
> > [2021-05-19 21:36:05,822] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
> Deleting segment 11753910 (kafka.log.Log)
> > [2021-05-19 21:36:05,825] INFO Deleted log
> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.log.deleted.
> (kafka.log.LogSegment)
> > [2021-05-19 21:36:05,827] INFO Deleted offset index
> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.index.deleted.
> (kafka.log.LogSegment)
> > [2021-05-19 21:36:05,829] INFO Deleted time index
> /apps/kafka-data/tracking.ap.client.traffic.keyed-2/11753910.timeindex.deleted.
> (kafka.log.LogSegment)
> > [2021-05-19 21:40:05,838] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data] Found
> deletable segments with base offsets [11755700] due to retention time
> 259200ms breach (kafka.log.Log)
> > [2021-05-19 21:40:05,843] INFO [ProducerStateManager
> partition=tracking.ap.client.traffic.keyed-2] Writing producer snapshot at
> offset 11757417 (kafka.log.ProducerStateManager)
> > [2021-05-19 21:40:05,845] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data] Rolled
> new log segment at offset 11757417 in 7 ms. (kafka.log.Log)
> > [2021-05-19 21:40:05,845] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
> Scheduling log segment [baseOffset 11755700, size 936249] for deletion.
> (kafka.log.Log)
> > [2021-05-19 21:40:05,847] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
> Incrementing log start offset to 11757417 (kafka.log.Log)
> > [2021-05-19 21:41:05,848] INFO [Log
> partition=tracking.ap.client.traffic.keyed-2, dir=/apps/kafka-data]
> Deleting segment 11755700 (kafka.log.Log)
> > [2021-05-19 21:41:05,850] INFO Deleted log
> 

Re: kafka command is not working

2021-05-17 Thread Ran Lupovich
In --bootstrap-server, put at least two broker nodes of the cluster, so the
initial connection can still fetch the cluster metadata when one broker is
down.
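
For example, something like this (the broker host names are placeholders
for your three brokers):

bin/kafka-consumer-groups.sh --describe --group pfsense_user \
  --bootstrap-server broker1:9092,broker2:9092,broker3:9092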

On Mon, May 17, 2021, 18:07, Aniket Pant wrote:

> Hi team,
> my question is, I have a 3-node Kafka cluster, and when I stop one broker
> I cannot check the consumer lag; it shows me an error like
> ```
> [2021-05-17 19:20:22,583] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1 (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:22,685] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:22,887] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:23,189] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:23,692] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:24,395] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:25,499] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-05-17 19:20:26,704] WARN [AdminClient clientId=adminclient-1]
> Connection to node -1  (localhost/xx.xx.xx.xx:9092) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> Error: Executing consumer group command failed due to
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a
> node assignment.
> java.util.concurrent.ExecutionException:
> org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a
> node assignment.
> at
>
> org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
> at
>
> org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
> at
>
> org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
> at
>
> org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:260)
> at
>
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.collectGroupOffsets(ConsumerGroupCommand.scala:331)
> at
>
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:251)
> at
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:59)
> at
> kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out
> waiting for a node assignment.
> ```
> I was running `bin/kafka-consumer-groups.sh --describe --group pfsense_user
> --all-topics --bootstrap-server xx.xx.xx.xx:9092'
>


Re: Kafka SSL

2021-04-30 Thread Ran Lupovich
https://docs.confluent.io/platform/current/kafka/authentication_ssl.html

Check this out

On Fri, Apr 30, 2021, 20:06, Ran Lupovich <
ranlupov...@gmail.com>:

> Hi seems you setup in port 9093 only ssl as a method of authentication and
> method of transfer encryption,  so it means in the client configuration you
> would need the keystore configured as well, you could choose other mean of
> authentication such as PLAIN_SSL or so own, hope thats helps, keep us
> updated,  good luck
>
> On Fri, Apr 30, 2021, 19:27, Calvin Chen <
> pingc...@hotmail.com>:
>
>> Hi all
>>
>> I'm working on Kafka(kafka_2.13-2.7.0)cluster with SSL enabled, and I
>> need help on Kafka broker config(I got error of connection failed) and
>> client SSL config(I got error of SSL handshake failed).
>>
>>
>> I setup Kafka and client SSL config by taking reference of
>> Apache Kafka<https://kafka.apache.org/documentation/#security_ssl>
>> Apache Kafka TLS encryption & authentication - Azure HDInsight |
>> Microsoft Docs<
>> https://docs.microsoft.com/en-us/azure/hdinsight/kafka/apache-kafka-ssl-encryption-authentication
>> >
>>
>> And I can verify my Kafka cluster SSL with below command:
>>
>> openssl s_client -debug -connect sc2-kafka-dev-001_node-1:9093 -tls1_2
>>
>> some output is:
>>
>> Server certificate
>> -BEGIN CERTIFICATE-
>> MIID1TCCAb0CFGy5db0MHYKTnZZAQpnHsR3ywrsqMA0GCSqGSIb3DQEBCwUAMBwx
>> GjAYBgNVBAMMEUthZmthLVNlY3VyaXR5LUNBMB4XDTIxMDQzMDE0NDEzMVoXDTIy
>> MDQzMDE0NDEzMVowMjEwMC4GA1UEAwwnc2MyLWthZmthLWRldi0wMDFfbm9kZS0x
>> LmVuZy52bXdhcmUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
>> wuL14qBmI++Ii/lxLU32TlGd0VlDX29JXjyqEUoaXDjYBroY5+FDhawladB3YU3/
>> IY2fQ9PHoPLVntBnMMf29m8buVFKXsRT0mOjkyVuUUZcp0L9mLMKnKE1Rn+EJM93
>> Ys0A8/YJgp3LYu0cbLbqw9TUdFkyesaV5zqAXse14npi0eqXk5pk5ss2ePfqa6bN
>> m2zM1eZrJjjp1vFx0oL8N6z2z6+AS67unyj9x2SjyXQgigbnz36VM99EUeMeQLuz
>> weuZN97sKKW4ub+ya0R6lbS5pum+iQ4ukA9TeiXllqwoFZTEZistsbec5OvgVgC0
>> 41I6rtlGdqkAPEyU8xtfnwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQAdTBndO51t
>> IK40oYHf2dWHE4WPvZfDoQpAVwhLptsbQD4RVdpPUxagbh4F4zAFwIZgCpwU0YBz
>> sq71p45x/3NjX40eIWsC0WgQoCQsCWimXQSMOltopNEhrSICd7mD1H/C1uftNXU1
>> uAGRGUC8wgX1ULdHLg0Szvz519ia+uZqOKyzsMBDZnmtesli3lTmXjjO5E5aPLaU
>> ztLeZrhHzR7ib9ZtIidl4hviPKbdLBPkeBqk7b821RbCK1Ny8eSOBYY3wePqTGU3
>> LbLEEeFgNBr9wEsmEcr237QW4UrYX5TjxeoykQj72u9tAb8mTrAY8QXUo9f826hQ
>> kTcSe504t6hMmX6oP9R3wUHqpIAZ3woqOV/I2KwCt2L3thUXyJK7F9XTSZQq89DT
>> E4SQlEthR+Mq/eIqyunq403MnQuxRGpfkiOLzBO1vUYDbnWjaC3oouTW9Y1rhF0L
>> t+DqaMXSTLyhcLZ8xUMcpgfROMArjufTsQ5KWqUYCTUffsrRVFzlyg02OjzgYJ5a
>> XR/lp64V3Ul1/8EM7QujDgdq9KTRu4FxuOk+8AFMOz4UJ1iqFONBKz6UTYmKjECw
>> aEp8k8WjuyHeuO5+d9qav+xYSQbHhZ5QSILKlyDSDkLWTjgNyvCMKzabtTW1HfQJ
>> p4DsCTjGse76yHJNAnH0jdGBVvi8ONdhuA==
>> -END CERTIFICATE-
>> subject=CN = sc2-kafka-dev-001_node-1.eng.vmware.com
>>
>> issuer=CN = Kafka-Security-CA
>>
>>
>> So when I see above output, does it means my SSL setup for Kafka broker
>> is ok?
>>
>>
>> However, I didn't get below keyword in server.log, as mentioned from
>> Kafka webpage, I should see below in server.log.
>>
>>
>> with addresses: PLAINTEXT -> EndPoint({{fqdn}},9092,PLAINTEXT),SSL ->
>> EndPoint({{fqdn}},9093,SSL)
>>
>> My two server.log output are:
>>
>> [2021-04-30 09:05:08,954] INFO [KafkaServer id=1] started
>> (kafka.server.KafkaServer)
>>
>> While another one is:
>>
>> [2021-04-30 09:05:30,183] WARN [Controller id=2, targetBrokerId=1]
>> Connection to node 1 (
>> sc2-kafka-dev-001_node-1.eng.vmware.com/10.185.50.10:9093) could not be
>> established. Broker may not be available.
>> (org.apache.kafka.clients.NetworkClient)
>> [2021-04-30 09:05:30,311] WARN [Controller id=2, targetBrokerId=3]
>> Connection to node 3 (
>> sc2-kafka-dev-001_node-3.eng.vmware.com/10.185.50.12:9093) could not be
>> established. Broker may not be available.
>> (org.apache.kafka.clients.NetworkClient)
>>
>> It looks like the Kafka cluster with SSL enabled has some problem on
>> setup connection across brokers. BTW, I haven't apply for the DNS record
>> for my brokers, I setup domain name in /etc/hosts, and it shall be ok for
>> the test?
>>
>>
>> Also, when I test Kafka command line with SSL config, I see auth error,
>> but I didn't config auth, I just config ssl encryption:
>>
>> [worker@sc2-kafka-dev-001_node-1 client]$
>> /opt/kafka/kafka_2.13-2.7.0/bin/kafka-console-producer.sh --broker-list
>> sc

Re: Kafka SSL

2021-04-30 Thread Ran Lupovich
Hi, it seems you set up port 9093 with SSL as both the authentication
method and the transport-encryption method, so in the client configuration
you would need the SSL stores (truststore, and a keystore if client
authentication is required) configured as well. You could also choose
another means of authentication, such as SASL_SSL, and so on. Hope that
helps, keep us updated, good luck.
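
As a rough sketch of a client-ssl.properties for an SSL listener (paths and
passwords below are placeholders; adjust to your own stores):

security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=changeit
# the keystore lines are only needed if the brokers set ssl.client.auth=required
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit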

On Fri, Apr 30, 2021, 19:27, Calvin Chen wrote:

> Hi all
>
> I'm working on Kafka(kafka_2.13-2.7.0)cluster with SSL enabled, and I need
> help on Kafka broker config(I got error of connection failed) and client
> SSL config(I got error of SSL handshake failed).
>
>
> I setup Kafka and client SSL config by taking reference of
> Apache Kafka
> Apache Kafka TLS encryption & authentication - Azure HDInsight | Microsoft
> Docs<
> https://docs.microsoft.com/en-us/azure/hdinsight/kafka/apache-kafka-ssl-encryption-authentication
> >
>
> And I can verify my Kafka cluster SSL with below command:
>
> openssl s_client -debug -connect sc2-kafka-dev-001_node-1:9093 -tls1_2
>
> some output is:
>
> Server certificate
> -BEGIN CERTIFICATE-
> MIID1TCCAb0CFGy5db0MHYKTnZZAQpnHsR3ywrsqMA0GCSqGSIb3DQEBCwUAMBwx
> GjAYBgNVBAMMEUthZmthLVNlY3VyaXR5LUNBMB4XDTIxMDQzMDE0NDEzMVoXDTIy
> MDQzMDE0NDEzMVowMjEwMC4GA1UEAwwnc2MyLWthZmthLWRldi0wMDFfbm9kZS0x
> LmVuZy52bXdhcmUuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
> wuL14qBmI++Ii/lxLU32TlGd0VlDX29JXjyqEUoaXDjYBroY5+FDhawladB3YU3/
> IY2fQ9PHoPLVntBnMMf29m8buVFKXsRT0mOjkyVuUUZcp0L9mLMKnKE1Rn+EJM93
> Ys0A8/YJgp3LYu0cbLbqw9TUdFkyesaV5zqAXse14npi0eqXk5pk5ss2ePfqa6bN
> m2zM1eZrJjjp1vFx0oL8N6z2z6+AS67unyj9x2SjyXQgigbnz36VM99EUeMeQLuz
> weuZN97sKKW4ub+ya0R6lbS5pum+iQ4ukA9TeiXllqwoFZTEZistsbec5OvgVgC0
> 41I6rtlGdqkAPEyU8xtfnwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQAdTBndO51t
> IK40oYHf2dWHE4WPvZfDoQpAVwhLptsbQD4RVdpPUxagbh4F4zAFwIZgCpwU0YBz
> sq71p45x/3NjX40eIWsC0WgQoCQsCWimXQSMOltopNEhrSICd7mD1H/C1uftNXU1
> uAGRGUC8wgX1ULdHLg0Szvz519ia+uZqOKyzsMBDZnmtesli3lTmXjjO5E5aPLaU
> ztLeZrhHzR7ib9ZtIidl4hviPKbdLBPkeBqk7b821RbCK1Ny8eSOBYY3wePqTGU3
> LbLEEeFgNBr9wEsmEcr237QW4UrYX5TjxeoykQj72u9tAb8mTrAY8QXUo9f826hQ
> kTcSe504t6hMmX6oP9R3wUHqpIAZ3woqOV/I2KwCt2L3thUXyJK7F9XTSZQq89DT
> E4SQlEthR+Mq/eIqyunq403MnQuxRGpfkiOLzBO1vUYDbnWjaC3oouTW9Y1rhF0L
> t+DqaMXSTLyhcLZ8xUMcpgfROMArjufTsQ5KWqUYCTUffsrRVFzlyg02OjzgYJ5a
> XR/lp64V3Ul1/8EM7QujDgdq9KTRu4FxuOk+8AFMOz4UJ1iqFONBKz6UTYmKjECw
> aEp8k8WjuyHeuO5+d9qav+xYSQbHhZ5QSILKlyDSDkLWTjgNyvCMKzabtTW1HfQJ
> p4DsCTjGse76yHJNAnH0jdGBVvi8ONdhuA==
> -END CERTIFICATE-
> subject=CN = sc2-kafka-dev-001_node-1.eng.vmware.com
>
> issuer=CN = Kafka-Security-CA
>
>
> So when I see above output, does it means my SSL setup for Kafka broker is
> ok?
>
>
> However, I didn't get below keyword in server.log, as mentioned from Kafka
> webpage, I should see below in server.log.
>
>
> with addresses: PLAINTEXT -> EndPoint({{fqdn}},9092,PLAINTEXT),SSL ->
> EndPoint({{fqdn}},9093,SSL)
>
> My two server.log output are:
>
> [2021-04-30 09:05:08,954] INFO [KafkaServer id=1] started
> (kafka.server.KafkaServer)
>
> While another one is:
>
> [2021-04-30 09:05:30,183] WARN [Controller id=2, targetBrokerId=1]
> Connection to node 1 (
> sc2-kafka-dev-001_node-1.eng.vmware.com/10.185.50.10:9093) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
> [2021-04-30 09:05:30,311] WARN [Controller id=2, targetBrokerId=3]
> Connection to node 3 (
> sc2-kafka-dev-001_node-3.eng.vmware.com/10.185.50.12:9093) could not be
> established. Broker may not be available.
> (org.apache.kafka.clients.NetworkClient)
>
> It looks like the Kafka cluster with SSL enabled has some problem on setup
> connection across brokers. BTW, I haven't apply for the DNS record for my
> brokers, I setup domain name in /etc/hosts, and it shall be ok for the test?
>
>
> Also, when I test Kafka command line with SSL config, I see auth error,
> but I didn't config auth, I just config ssl encryption:
>
> [worker@sc2-kafka-dev-001_node-1 client]$
> /opt/kafka/kafka_2.13-2.7.0/bin/kafka-console-producer.sh --broker-list
> sc2-kafka-dev-001_node-1:9093 --topic topic1 --producer.config
> ./client-ssl.properties
> >[2021-04-30 09:11:19,574] ERROR [Producer clientId=console-producer]
> Connection to node -1 (sc2-kafka-dev-001_node-1/10.185.50.10:9093) failed
> authentication due to: SSL handshake failed
> (org.apache.kafka.clients.NetworkClient)
> [2021-04-30 09:11:19,575] WARN [Producer clientId=console-producer]
> Bootstrap broker sc2-kafka-dev-001_node-1:9093 (id: -1 rack: null)
> disconnected (org.apache.kafka.clients.NetworkClient)
>
>
> Here is my part of Kafka broker config:
>
> listeners=PLAINTEXT://sc2-kafka-dev-001_node-2.eng.vmware.com:9092, SSL://
> sc2-kafka-dev-001_node-2.eng.vmware.com:9093
> advertised.listeners=PLAINTEXT://
> sc2-kafka-dev-001_node-2.eng.vmware.com:9092, SSL://
> sc2-kafka-dev-001_node-2.eng.vmware.com:9093
>
> ssl.endpoint.identification.algorithm=
> 

Re: Standard way to get http POST request into a Kafka topic?

2021-04-28 Thread Ran Lupovich
Btw, I just now accomplished a working POC in dev using WSO2, Confluent
REST Proxy, Confluent Schema Registry, and Kafka: producing messages to
Kafka via an HTTP POST request.
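
For example, against the REST Proxy v2 API with the JSON embedded format
(the host and topic name are placeholders):

curl -X POST http://localhost:8082/topics/my-topic \
  -H "Content-Type: application/vnd.kafka.json.v2+json" \
  -d '{"records":[{"value":{"hello":"world"}}]}'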

On Wed, Apr 28, 2021, 06:42, Ran Lupovich <
ranlupov...@gmail.com>:

> Hi, have a look for Rest Proxy component as part of the kafka eco system
>
> On Wed, Apr 28, 2021, 01:27, Reed Villanueva <
> villanuevar...@gmail.com>:
>
>> What is the best-practice/kafka way to get http(s) POST requests into a
>> Kafka topic (kafka v2.0.0 installed on a HDP cluster)?
>> Have never used kafka before and would like to know the best way that this
>> should be done.
>> Basically, we have a public URL that is going to receive requests from a
>> specific external URL based on event hooks (
>> https://developers.acuityscheduling.com/docs/webhooks) and I want to get
>> these requests into a kafka topic.
>> I've seen this (
>> https://docs.confluent.io/3.0.0/kafka-rest/docs/intro.html),
>> but am a bit confused (again, have never used kafka before). Will there
>> need to be an always-on producer to read from these event hooks to produce
>> into a topic? What is the best practice way to do this to account for
>> whatever common fault tolerances that should be built into a kafka
>> producer
>> for this kind of live event feed? No way to just automatically dump the
>> requests into the topic (and avoid having to ensure such a simple
>> forwarding producer is always alive (and thus not forever missing the data
>> that came in during that downtime))?
>>
>> Thank you
>>
>


Re: Standard way to get http POST request into a Kafka topic?

2021-04-27 Thread Ran Lupovich
Hi, have a look at the REST Proxy component, part of the Kafka ecosystem.

On Wed, Apr 28, 2021, 01:27, Reed Villanueva <
villanuevar...@gmail.com>:

> What is the best-practice/kafka way to get http(s) POST requests into a
> Kafka topic (kafka v2.0.0 installed on a HDP cluster)?
> Have never used kafka before and would like to know the best way that this
> should be done.
> Basically, we have a public URL that is going to receive requests from a
> specific external URL based on event hooks (
> https://developers.acuityscheduling.com/docs/webhooks) and I want to get
> these requests into a kafka topic.
> I've seen this (https://docs.confluent.io/3.0.0/kafka-rest/docs/intro.html
> ),
> but am a bit confused (again, have never used kafka before). Will there
> need to be an always-on producer to read from these event hooks to produce
> into a topic? What is the best practice way to do this to account for
> whatever common fault tolerances that should be built into a kafka producer
> for this kind of live event feed? No way to just automatically dump the
> requests into the topic (and avoid having to ensure such a simple
> forwarding producer is always alive (and thus not forever missing the data
> that came in during that downtime))?
>
> Thank you
>


Re: JDBC Sink Connector: Counterpart to Dead Letter Queue to keep track of successfully processed records

2021-04-27 Thread Ran Lupovich
Just a thought, what about a JDBC source connector from the database to the
success topic?
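
A very rough sketch, assuming the Confluent JDBC source connector and a
table with an auto-incrementing id column (all names and credentials below
are placeholders):

name=ingestion-success-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://db-host:5432/ingestion
connection.user=ingestion_reader
connection.password=change-me
mode=incrementing
incrementing.column.name=id
table.whitelist=ingested_messages
topic.prefix=ingestion_success_

Note that topic.prefix is prepended to the table name, so the target topic
here would be ingestion_success_ingested_messages.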

On Wed, Apr 28, 2021, 00:19, Florian McKee <
florian.mc...@gmail.com>:

> Hi,
>
>
> I want to ingest messages that are sent by 3rd parties into our system.
>
>
> The ingestion process is as follows:
>
>- verify message content
>- forward invalid messages to ingestion_failure topic
>- persist valid messages in PostgreSQL
>- forward messages that have been persisted to ingestion_success topic
>
>
> The last point is key: Only forward messages to ingestion_successful if
> they have been persisted in PostgreSQL. Is there any way I can do that with
> a JDBC sink connector?
>
> I'm basically looking for the counterpart of the Dead Letter Queue for
> records that have been processed successfully by the connector.
>
> If that's beyond the scope of connectors: Any recommendations on how to do
> that any other way?
>
>
>
> Thanks
> Florian
>


Re: How to emit lag into Prometheus?

2021-04-23 Thread Ran Lupovich
Did you test this out?
https://github.com/lightbend/kafka-lag-exporter

We are using bash scripting to describe the consumer groups from the
cluster side into a file and ingest it into Splunk for dashboarding...

Note - the approach of describing from outside the consumer does not give
the 'real time' view, only the 'committed' view.
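
Roughly the kind of bash scripting mentioned above (untested as posted; the
broker address and output path are placeholders):

#!/usr/bin/env bash
BOOTSTRAP=broker1:9092
OUT=/var/log/kafka-lag/lag_$(date +%Y%m%d%H%M%S).txt
for g in $(kafka-consumer-groups --bootstrap-server "$BOOTSTRAP" --list); do
  kafka-consumer-groups --bootstrap-server "$BOOTSTRAP" --describe --group "$g" >> "$OUT"
done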

On Fri, Apr 23, 2021, 12:20, Dumitru Nicolae Marasoiu <
nicolae.maras...@gmail.com>:

> Hi, what is suggested way to emit lag into Prometheus? Is there a Kafka
> Streams job for it? Is Burrow or Kafka Lag Exporter?
> Emitting lag from the consumer is not a great option of course, so looking
> at independent jobs that emit lag.
> Do you know any? Thank you
>


Re: [kafka-clients] Re: [ANNOUNCE] Apache Kafka 2.8.0

2021-04-19 Thread Ran Lupovich
Hello, maybe I missed it in the documentation, but where can I read about
the future plan for the ZooKeeper-managed ACLs?

On Mon, Apr 19, 2021, 22:48, Israel Ekpo wrote:

> This is fantastic news!
>
> Thanks everyone for contributing and thanks John for managing the release.
>
> On Mon, Apr 19, 2021 at 1:10 PM Guozhang Wang  wrote:
>
> > This is great! Thanks to everyone who has contributed to the release.
> >
> > On Mon, Apr 19, 2021 at 9:36 AM John Roesler 
> wrote:
> >
> >> The Apache Kafka community is pleased to announce the
> >> release for Apache Kafka 2.8.0
> >>
> >> Kafka 2.8.0 includes a number of significant new features.
> >> Here is a summary of some notable changes:
> >>
> >> * Early access of replace ZooKeeper with a self-managed
> >> quorum
> >> * Add Describe Cluster API
> >> * Support mutual TLS authentication on SASL_SSL listeners
> >> * JSON request/response debug logs
> >> * Limit broker connection creation rate
> >> * Topic identifiers
> >> * Expose task configurations in Connect REST API
> >> * Update Streams FSM to clarify ERROR state meaning
> >> * Extend StreamJoined to allow more store configs
> >> * More convenient TopologyTestDriver construtors
> >> * Introduce Kafka-Streams-specific uncaught exception
> >> handler
> >> * API to start and shut down Streams threads
> >> * Improve TimeWindowedDeserializer and TimeWindowedSerde to
> >> handle window size
> >> * Improve timeouts and retries in Kafka Streams
> >>
> >> All of the changes in this release can be found in the
> >> release notes:
> >> https://www.apache.org/dist/kafka/2.8.0/RELEASE_NOTES.html
> >>
> >>
> >> You can download the source and binary release (Scala 2.12
> >> and 2.13) from:
> >> https://kafka.apache.org/downloads#2.8.0
> >>
> >> 
> >> ---
> >>
> >>
> >> Apache Kafka is a distributed streaming platform with four
> >> core APIs:
> >>
> >>
> >> ** The Producer API allows an application to publish a
> >> stream of records to one or more Kafka topics.
> >>
> >> ** The Consumer API allows an application to subscribe to
> >> one or more topics and process the stream of records
> >> produced to them.
> >>
> >> ** The Streams API allows an application to act as a stream
> >> processor, consuming an input stream from one or more topics
> >> and producing an output stream to one or more output topics,
> >> effectively transforming the input streams to output
> >> streams.
> >>
> >> ** The Connector API allows building and running reusable
> >> producers or consumers that connect Kafka topics to existing
> >> applications or data systems. For example, a connector to a
> >> relational database might capture every change to a table.
> >>
> >>
> >> With these APIs, Kafka can be used for two broad classes of
> >> application:
> >>
> >> ** Building real-time streaming data pipelines that reliably
> >> get data between systems or applications.
> >>
> >> ** Building real-time streaming applications that transform
> >> or react to the streams of data.
> >>
> >>
> >> Apache Kafka is in use at large and small companies
> >> worldwide, including Capital One, Goldman Sachs, ING,
> >> LinkedIn, Netflix, Pinterest, Rabobank, Target, The New York
> >> Times, Uber, Yelp, and Zalando, among others.
> >>
> >> A big thank you for the following 128 contributors to this
> >> release!
> >>
> >> 17hao, abc863377, Adem Efe Gencer, Alexander Iskuskov, Alok
> >> Nikhil, Anastasia Vela, Andrew Lee, Andrey Bozhko, Andrey
> >> Falko, Andy Coates, Andy Wilkinson, Ankit Kumar, APaMio,
> >> Arjun Satish, ArunParthiban-ST, A. Sophie Blee-Goldman,
> >> Attila Sasvari, Benoit Maggi, bertber, bill, Bill Bejeck,
> >> Bob Barrett, Boyang Chen, Brajesh Kumar, Bruno Cadonna,
> >> Cheng Tan, Chia-Ping Tsai, Chris Egerton, CHUN-HAO TANG,
> >> Colin Patrick McCabe, Colin P. Mccabe, Cyrus Vafadari, David
> >> Arthur, David Jacot, David Mao, dengziming, Dhruvil Shah,
> >> Dima Reznik, Dongjoon Hyun, Dongxu Wang, Emre Hasegeli,
> >> feyman2016, fml2, Gardner Vickers, Geordie, Govinda Sakhare,
> >> Greg Harris, Guozhang Wang, Gwen Shapira, Hamza Slama,
> >> high.lee, huxi, Igor Soarez, Ilya Ganelin, Ismael Juma, Ivan
> >> Ponomarev, Ivan Yurchenko, jackyoh, James Cheng, James
> >> Yuzawa, Jason Gustafson, Jesse Gorzinski, Jim Galasyn, John
> >> Roesler, Jorge Esteban Quilcate Otoya, José Armando García
> >> Sancio, Julien Chanaud, Julien Jean Paul Sirocchi, Justine
> >> Olshan, Kengo Seki, Kowshik Prakasam, leah, Lee Dongjin,
> >> Levani Kokhreidze, Lev Zemlyanov, Liju John, Lincong Li,
> >> Lucas Bradstreet, Luke Chen, Manikumar Reddy, Marco Aurelio
> >> Lotz, mathieu, Matthew Wong, Matthias J. Sax, Matthias
> >> Merdes, Michael Bingham, Michael G. Noll, Mickael Maison,
> >> Montyleo, mowczare, Nikolay, Nikolay Izhikov, Ning Zhang,
> >> Nitesh Mor, Okada Haruki, panguncle, parafiend, Patrick
> >> Dignan, Prateek Agarwal, Prithvi, Rajini Sivaram, Raman
> >> 

Re: options for kafka cluster backup?

2021-03-05 Thread Ran Lupovich
I guess that to avoid data loss you would need to use 3 replicas with
rack/site awareness. Confluent's Replicator or MirrorMaker are for copying
data from one cluster to another, usually in different DCs/regions, if I am
not mistaken.
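
For the MirrorMaker 2 route, a minimal mm2.properties sketch (the cluster
aliases and broker addresses are placeholders, untested):

clusters = primary, backup
primary.bootstrap.servers = primary-broker1:9092
backup.bootstrap.servers = backup-broker1:9092
primary->backup.enabled = true
primary->backup.topics = .*

It can be run with bin/connect-mirror-maker.sh mm2.properties.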

On Fri, Mar 5, 2021, 11:21, Pushkar Deole wrote:

> Thanks Luke... is the mirror maker asynchronous? What will be typical lag
> between the replicated cluster and running cluster and in case of disaster,
> what are the chances of data loss?
>
> On Fri, Mar 5, 2021 at 11:37 AM Luke Chen  wrote:
>
> > Hi Pushkar,
> > MirrorMaker is what you're looking for.
> > ref: https://kafka.apache.org/documentation/#georeplication-mirrormaker
> >
> > Thanks.
> > Luke
> >
> > On Fri, Mar 5, 2021 at 1:50 PM Pushkar Deole 
> wrote:
> >
> > > Hi All,
> > >
> > > I was looking for some options to backup a running kafka cluster, for
> > > disaster recovery requirements. Can someone provide what are the
> > available
> > > options to backup and restore a running cluster in case the entire
> > cluster
> > > goes down?
> > >
> > > Thanks..
> > >
> >
>