> On May 17, 2020, at 11:45 AM, Victoria Zuberman 
> <victoria.zuber...@imperva.com> wrote:
> 
>  Regarding acks=all:
> -------------------------
> Interesting point. Will check acks and min.insync.replicas values.
> If I understand the root cause you are suggesting correctly, given my RF=2 
> and 3 brokers in the cluster:
> min.insync.replicas > 1 and acks=all, removing one broker -------> a partition 
> that had a replica on the removed broker can't be written to until the replica 
> is back in sync on another broker?

That is correct. From a producer standpoint, the unaffected partitions will 
still be able to accept data, so depending on data rate and message size, 
producers may not be negatively affected by the missing broker.
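
For reference, here is a minimal librdkafka producer sketch that sets acks=all 
and uses a delivery report callback to surface the NOT_ENOUGH_REPLICAS error a 
broker returns when an acks=all write hits a partition whose ISR has fallen 
below min.insync.replicas. The broker address and topic name are placeholders, 
and it assumes a reasonably recent librdkafka where topic-level properties like 
acks can be set on the global config:

#include <stdio.h>
#include <librdkafka/rdkafka.h>

/* Delivery report callback: inspects per-message results. */
static void dr_cb(rd_kafka_t *rk, const rd_kafka_message_t *msg, void *opaque) {
        (void)rk; (void)opaque;
        if (msg->err == RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS)
                fprintf(stderr, "partition %d rejected write: ISR below min.insync.replicas\n",
                        (int)msg->partition);
        else if (msg->err)
                fprintf(stderr, "delivery failed: %s\n", rd_kafka_err2str(msg->err));
}

int main(void) {
        char errstr[512];
        rd_kafka_conf_t *conf = rd_kafka_conf_new();

        /* "broker1:9092" and "my-topic" are placeholder names. */
        rd_kafka_conf_set(conf, "bootstrap.servers", "broker1:9092", errstr, sizeof(errstr));
        rd_kafka_conf_set(conf, "acks", "all", errstr, sizeof(errstr)); /* alias for request.required.acks=-1 */
        rd_kafka_conf_set_dr_msg_cb(conf, dr_cb);

        rd_kafka_t *rk = rd_kafka_new(RD_KAFKA_PRODUCER, conf, errstr, sizeof(errstr));

        rd_kafka_producev(rk,
                          RD_KAFKA_V_TOPIC("my-topic"),
                          RD_KAFKA_V_VALUE("hello", 5),
                          RD_KAFKA_V_MSGFLAGS(RD_KAFKA_MSG_F_COPY),
                          RD_KAFKA_V_END);

        rd_kafka_flush(rk, 10000); /* wait for delivery reports */
        rd_kafka_destroy(rk);
        return 0;
}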

> Regarding the number of partitions
> -----------------------------------------
> The producer to this topic is using librdkafka with a partitioner_cb callback, 
> which receives the number of partitions as partitions_cnt.

This makes sense: when the callback is invoked, you get the partitions that are 
currently able to accept data. When a broker goes down and some partitions 
become under-replicated, and your producer settings rule those partitions out as 
valid targets, then partitions_cnt will only count the partitions that remain 
writable.
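
For illustration, here is a rough sketch of what a custom librdkafka partitioner 
can look like; the signature matches rd_kafka_topic_conf_set_partitioner_cb(), 
but the key-hashing scheme below is just a placeholder, not your actual logic. 
The library passes partition_cnt in from its cached topic metadata, and 
rd_kafka_topic_partition_available() may be called (only inside this callback) 
to check whether the chosen partition currently has a leader:

#include <stddef.h>
#include <stdint.h>
#include <librdkafka/rdkafka.h>

static int32_t my_partitioner(const rd_kafka_topic_t *rkt,
                              const void *keydata, size_t keylen,
                              int32_t partition_cnt,
                              void *rkt_opaque, void *msg_opaque) {
        (void)rkt_opaque; (void)msg_opaque;

        if (partition_cnt <= 0)
                return RD_KAFKA_PARTITION_UA;

        /* Simple additive hash over the key bytes (placeholder scheme). */
        uint32_t hash = 0;
        for (size_t i = 0; i < keylen; i++)
                hash = hash * 31u + ((const unsigned char *)keydata)[i];

        int32_t partition = (int32_t)(hash % (uint32_t)partition_cnt);

        /* If the chosen partition has no available leader right now, signal
         * that partitioning could not be performed. */
        if (!rd_kafka_topic_partition_available(rkt, partition))
                return RD_KAFKA_PARTITION_UA;

        return partition;
}

/* Registered on the topic configuration before creating the topic handle:
 *   rd_kafka_topic_conf_set_partitioner_cb(topic_conf, my_partitioner);
 */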

> Still trying to understand how the library obtains the partitions_cnt value.
> I wonder if the behavior is similar to the Java library, where the default 
> partitioner uses the number of available partitions as the number of current 
> partitions...

The logic is similar, as that is how Kafka is designed. The client will fetch 
the topic’s metadata (including the partitions available for writing) on connect, 
on error, and at the interval determined by topic.metadata.refresh.interval.ms, 
unless that interval is set to -1.
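
If you want to see what the client currently believes about a topic, you can 
query the metadata directly. Here is a rough sketch (the topic name is a 
placeholder, and rk is assumed to be an already-created producer handle); it 
reports both the total partition count and how many partitions currently have a 
leader, which is the distinction that matters when a broker drops out:

#include <stdio.h>
#include <librdkafka/rdkafka.h>

static void dump_partition_counts(rd_kafka_t *rk, const char *topic_name) {
        rd_kafka_topic_t *rkt = rd_kafka_topic_new(rk, topic_name, NULL);
        const struct rd_kafka_metadata *md;

        /* Request metadata for just this topic, with a 5 second timeout. */
        if (rd_kafka_metadata(rk, 0, rkt, &md, 5000) != RD_KAFKA_RESP_ERR_NO_ERROR) {
                fprintf(stderr, "metadata request failed\n");
                rd_kafka_topic_destroy(rkt);
                return;
        }

        for (int i = 0; i < md->topic_cnt; i++) {
                const rd_kafka_metadata_topic_t *t = &md->topics[i];
                int with_leader = 0;
                for (int p = 0; p < t->partition_cnt; p++)
                        if (t->partitions[p].leader >= 0)
                                with_leader++;
                printf("%s: %d partitions total, %d with a current leader\n",
                       t->topic, t->partition_cnt, with_leader);
        }

        rd_kafka_metadata_destroy(md);
        rd_kafka_topic_destroy(rkt);
}

/* Example use: dump_partition_counts(rk, "my-topic"); */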

> On 17/05/2020, 20:59, "Peter Bukowinski" <pmb...@gmail.com> wrote:
> 
> 
>    If your producer is set to use acks=all, then it won’t be able to produce 
> to the topic partitions that had replicas on the missing broker until the 
> replacement broker has finished catching up and is included in the ISR.
> 
>    What method are you using that reports on the number of topic partitions? 
> If some partitions go offline, the cluster still knows how many there are 
> supposed to be, so I’m curious what is reporting 10 when there should be 15.
> 
>    -- Peter
> 
>> On May 17, 2020, at 10:36 AM, Victoria Zuberman 
>> <victoria.zuber...@imperva.com> wrote:
>> 
>> Hi,
>> 
>> Kafka cluster with 3 brokers, version 1.0.1.
>> Topic with 15 partitions, replication factor 2. All replicas in sync.
>> Bringing down one of the brokers (ungracefully), then adding a broker in 
>> version 1.0.1
>> 
>> During this process, should we expect either of the following to happen:
>> 
>> 1.  Some of the partitions become unavailable for the producer to write to
>> 2.  The cluster reports the number of partitions for the topic as 10 and not 15
>> It seems like both issues take place in our case, for about a minute.
>> 
>> We are trying to understand whether it is an expected behavior and if not, 
>> what can be causing it.
>> 
>> Thanks,
>> Victoria
> 
> 
