Kafka Ate My Data!

2021-06-17 Thread Jhanssen Fávaro
Hi all, we were testing Kafka disaster recovery in our sites. Is there any way to avoid the scenario in this post? https://blog.softwaremill.com/help-kafka-ate-my-data-ae2e5d3e6576 But unclean leader election is not an option in our case. FYI, we needed to deactivate our systemctl unit for Kafka

Re: Kafka Ate My Data!

2021-06-17 Thread Ran Lupovich
Having the settings described above will tolerate one broker down without a service outage. On Fri, Jun 18, 2021, 00:42, Ran Lupovich ‏< ranlupov...@gmail.com> wrote: > That's why you have 3 brokers at minimum for production, having > replication factor set to 3, min.isr set to 2, having

Re: Kafka Ate My Data!

2021-06-17 Thread Ran Lupovich
That's why you have 3 brokers at minimum for production: replication factor set to 3, min.isr set to 2, and each broker on a different rack. You could also use MM2 or Replicator to copy data to another DC... On Fri, Jun 18, 2021, 00:33, Jhanssen Fávaro wrote: <
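The setup Ran recommends could be sketched as broker configuration like the fragment below (a minimal sketch; the rack names are illustrative assumptions, not from the thread):

```properties
# server.properties on each of the 3 brokers (a sketch)
broker.rack=rack-1              # rack-2 / rack-3 on the other brokers
default.replication.factor=3    # every partition has 3 replicas
min.insync.replicas=2           # writes with acks=all need 2 live replicas
```

With these settings, one broker can be down and producers using acks=all keep working; losing a second in-sync replica blocks writes rather than silently losing data.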

Re: Kafka Ate My Data!

2021-06-17 Thread Jhanssen Fávaro
That's a disaster recovery simulation; we need to validate a way to avoid that in a disaster scenario! I mean, if we have a disaster and the servers get rebooted, we need to guard against this Kafka weakness. Regards, Jhanssen Fávaro de Oliveira On Thu, Jun 17, 2021 at 6:30 PM Sunil Unnithan wrote:

Re: Kafka Ate My Data!

2021-06-17 Thread Sunil Unnithan
Why would you reboot all three brokers in the same week/day? On Thu, Jun 17, 2021 at 5:26 PM Jhanssen Fávaro wrote: > Sunil, > Business needs... Anyway, if it were 2, we would face the same problem. For > example, if the partition leader was the last one to be rebooted and then > got its disk

Re: Kafka Ate My Data!

2021-06-17 Thread Jhanssen Fávaro
Sunil, business needs... Anyway, if it were 2, we would face the same problem. For example, if the partition leader was the last one to be rebooted and then got its disk corrupted, the erase would happen the same way. Regards, On 2021/06/17 21:23:40, Sunil Unnithan wrote: > Why isr=all? Why

Re: Kafka Ate My Data!

2021-06-17 Thread Sunil Unnithan
Why isr=all? Why not use min.isr=2 in this case? On Thu, Jun 17, 2021 at 5:11 PM Jhanssen Fávaro wrote: > Basically, if we have 3 brokers and the ISR == all, and in the case that a > leader partition broker was the last server that was restarted/rebooted, > and during its startup got a disk

Re: Kafka Ate My Data!

2021-06-17 Thread Jhanssen Fávaro
Basically, if we have 3 brokers and ISR == all, and the leader of a partition was the last server to be restarted/rebooted and got a disk corruption during its startup, all the followers will mark the topic as offline. So, if the last broker leader that got the
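The trade-off being discussed maps onto one broker setting, sketched below. Enabling it would bring the partition back online in this scenario, but at the cost of the data loss described in the linked blog post, which is why the original poster rules it out:

```properties
# server.properties (sketch)
# false (the default since Kafka 0.11): if the only surviving replica of a
# partition is out of sync, the partition stays offline and committed data
# is preserved. Setting this to true elects the stale replica as leader,
# restoring availability but discarding any records it never replicated.
unclean.leader.election.enable=false
```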

vulnerabilities

2021-06-17 Thread Elvis-ch1
Hello, I apologize if this is not the right email address to report vulnerabilities to; I couldn't find an email address here ( https://github.com/apache/kafka/security ) to report vulnerabilities, which is unusual. We happen to be using Kafka in our environment (source image=

Re: kafka mirror with TLS and SASL enabled

2021-06-17 Thread Ryanne Dolan
Calvin, that's an interesting idea. The motivation behind the current behavior is to only grant principals access to data they already have access to. If a principal can access data in one cluster, there's no harm in providing read access to the same data in another cluster. But you are right that

kafka mirror with TLS and SASL enabled

2021-06-17 Thread Calvin Chen
Hi all, I have a question: does Kafka MirrorMaker 2.0 mirror Kafka users (created dynamically by kafka-configs.sh) and Kafka ACLs (topic/group)? I set up the fields below in the mirror config file, and I think MirrorMaker 2.0 should mirror users and ACLs (topic/group) into the remote cluster, but I see only part of
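For reference, the MM2 settings that control this kind of syncing look roughly like the sketch below (the cluster aliases are assumptions, not from the message). Note that MM2 can sync topic ACLs and topic configs, but to my knowledge it does not replicate SCRAM user credentials created via kafka-configs.sh, which may explain why only part of the data appears on the remote cluster:

```properties
# mm2.properties (sketch; aliases "source" and "target" are assumptions)
clusters = source, target
source->target.enabled = true
sync.topic.acls.enabled = true     # copy topic-level ACLs to the target
sync.topic.configs.enabled = true  # copy topic configs to the target
```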

Enabling TLS causes AEADBadTagException

2021-06-17 Thread Chris Baumgartner
Hello, I am working on Java code that sends data to Kafka. I am trying to configure TLS. I think I have created all of the keys and certs correctly. When I attempt to send a message to Kafka, I get the stack trace below. I am stumped as to what is causing this. Has anyone else seen this before?
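For comparison, a minimal TLS client configuration looks like the sketch below (paths and passwords are placeholders, not from the message). An AEADBadTagException while loading a PKCS12 keystore or truststore typically means the password does not match the one the store was created with, so that is worth double-checking first:

```properties
# client TLS settings (sketch; paths and passwords are placeholders)
security.protocol=SSL
ssl.truststore.location=/path/to/client.truststore.jks
# AEADBadTagException is commonly thrown when this password does not
# match the one used when the store was created
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```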