Thanks, Peter, for the quick response.
"You may need to enable unclean leader election, as well." - We have already
set unclean leader election to true. Unfortunately, both brokers went down
within 10 minutes of each other. Another option is to increase the replication
factor from 2 to 3. Right now the goal is
If the only replicas for that topic partition exist on brokers 15 and 24 and
they are both down, then you cannot recover the partition until either of them
is replaced or repaired and rejoins the cluster. You may need to enable unclean
leader election, as well.
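As a rough sketch of the two operations discussed above, assuming the stock CLI tools shipped with this 2.2.0 cluster (on 2.2 these tools still take a ZooKeeper address; the `zk:2181` address and the reassignment file name are placeholders, the topic name `1453` is from the thread):

```shell
# Enable unclean leader election for the topic. This allows an out-of-sync
# replica to become leader, at the cost of possible data loss.
kafka-configs.sh --zookeeper zk:2181 \
  --entity-type topics --entity-name 1453 \
  --alter --add-config unclean.leader.election.enable=true

# Raising the replication factor from 2 to 3 is done with a reassignment
# plan: list three broker ids per partition in the JSON file, then execute.
kafka-reassign-partitions.sh --zookeeper zk:2181 \
  --reassignment-json-file increase-rf.json --execute
```

Note that neither command helps while both replicas (brokers 15 and 24) are offline; at least one of them has to rejoin the cluster first.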
As you’ve discovered, adding
Hi Experts, We have seen a problem with a partition leader, i.e., it is set
to -1.
describe output:
Topic: 1453 Partition: 47 Leader: -1 Replicas: 24,15 Isr: 24
Kafka Version: 2.2.0
Replication: 2
Partitions: 48
Brokers 24 and 15 are both down due to disk errors, and we lost partition
47. I tried
I have not found relying on partitions for parallelism to be a disadvantage.
At Flurry, we have several pipelines using both the lower-level Kafka API
(for legacy reasons) and Kafka Streams + Kafka Connect.
They process over 10B events per day at around 200k rps. We also use the
same system to send over
Hello Javier,
When a rebalance happens and new tasks (hence input partitions) that need to
be restored are assigned, the state of the instance also transitions to
REBALANCING, and only transitions back to RUNNING after all tasks have
completed restoring and all are being
Hi,
I understand that the state store listener that can be set using
KafkaStreams.setGlobalStateRestoreListener will be invoked when the streams
app starts if it doesn't find the state locally (e.g., running in an
ephemeral Docker container).
However, I wonder if the process happens as well if
Rajeev, the config errors are unavoidable at present and can be ignored or
silenced. The Plugin error is concerning, and was previously described by
Vishal. I suppose it's possible there is a dependency conflict in these
builds. Can you send me the hash that you're building from? I'll try to
Consider using Flink, Spark, or Samza instead.
Ryanne
On Fri, Nov 8, 2019, 4:27 AM Debraj Manna wrote:
> Hi
>
> Is there any documentation or link I can refer to for the steps for
> deploying the Kafka Streams application in YARN?
>
> Kafka Client - 0.11.0.3
> Kafka Broker - 2.2.1
> YARN -
Sachin, assuming you are using something like MM2, I recommend the
following approaches:
1) have an external system monitor the clusters and trigger a failover by
terminating the existing consumer group and launching a replacement. This
can be done manually or can be automated if your
I believe the behavior has changed over time. There is a way to explicitly set
a partitioner, and one is provided:
https://github.com/axbaretto/kafka/blob/master/clients/src/main/java/org/apache/kafka/clients/producer/RoundRobinPartitioner.java
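For illustration, here is a minimal Java sketch of the round-robin idea behind that partitioner, not the Kafka class itself, just the core logic: an atomic counter cycled over the partition count, ignoring the record key.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of round-robin partitioning: each call returns the next
// partition in sequence, wrapping around at the partition count.
class RoundRobinSketch {
    private final AtomicInteger counter = new AtomicInteger(0);

    // floorMod keeps the result non-negative even after the counter overflows.
    public int partition(int numPartitions) {
        return Math.floorMod(counter.getAndIncrement(), numPartitions);
    }

    public static void main(String[] args) {
        RoundRobinSketch p = new RoundRobinSketch();
        for (int i = 0; i < 6; i++) {
            System.out.print(p.partition(3) + " "); // prints 0 1 2 0 1 2
        }
        System.out.println();
    }
}
```

The real `RoundRobinPartitioner` additionally consults cluster metadata to cycle only over available partitions, but the counter-modulo-count core is the same idea.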
On 11/10/19, 5:45 AM, "Oliver Eckle" wrote:
Hi
You have a typo - you mean deserializer.
Please try again.
Regards,
On Mon, 11 Nov 2019 at 14:28, Jorg Heymans wrote:
> Don't think that option is available there, specifying
> 'value.deserializer' in my consumer-config.properties file gives
>
> [2019-11-11 15:16:26,589] WARN The configuration
Don't think that option is available there, specifying 'value.deserializer' in
my consumer-config.properties file gives
[2019-11-11 15:16:26,589] WARN The configuration 'value.serializer' was
supplied but isn't a known config.
(org.apache.kafka.clients.consumer.ConsumerConfig)
Does there
Hi,
On Mon, 11 Nov 2019 at 11:55, Sachin Kale wrote:
> Hi,
>
> We are working on a prototype where we write to two Kafka cluster
> (primary-secondary) and read from one of them (based on which one is
> primary) to increase the availability. There is a flag which is used to
> determine which
Hi,
On Mon, 11 Nov 2019 at 10:58, Jorg Heymans wrote:
> Hi,
>
> I have created a class implementing Deserializer, providing an
> implementation for
>
> public String deserialize(String topic, Headers headers, byte[] data)
>
> that does some conditional processing based on headers, and then
Hi,
We are working on a prototype where we write to two Kafka cluster
(primary-secondary) and read from one of them (based on which one is
primary) to increase the availability. There is a flag which is used to
determine which cluster is primary and other becomes secondary. On
detecting primary
Hi,
I have created a class implementing Deserializer, providing an implementation
for
public String deserialize(String topic, Headers headers, byte[] data)
that does some conditional processing based on headers, and then calls the
other serde method
public String deserialize(String topic,
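The pattern described above (inspect the record headers, then delegate to the plain two-argument `deserialize`) can be sketched as below. The `Headers` class here is a simplified stand-in for Kafka's `org.apache.kafka.common.header.Headers` interface, and the `encoding` header with its upper-casing behavior is purely illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Simplified stand-in for org.apache.kafka.common.header.Headers:
// a map of header name to raw bytes (the real interface is richer).
class Headers {
    private final Map<String, byte[]> values;
    Headers(Map<String, byte[]> values) { this.values = values; }
    byte[] lastHeader(String key) { return values.get(key); }
}

// Conditional processing based on headers, then delegation to the
// plain deserialize method for the actual decoding.
class ConditionalStringDeserializer {
    public String deserialize(String topic, Headers headers, byte[] data) {
        byte[] enc = headers.lastHeader("encoding"); // hypothetical header
        String result = deserialize(topic, data);
        if (enc != null
                && "upper".equals(new String(enc, StandardCharsets.UTF_8))) {
            return result.toUpperCase();
        }
        return result;
    }

    public String deserialize(String topic, byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }
}
```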