from this point forward
>>
>> seems to me something like this could be made part of the replicator
>> component (mirror maker, or whatever else you want to use) - if topic X
>> does not exist in destination, create it, reset initial offsets to match
>> sou
some kind of replication logic error -
for example, when two replication instances are somehow launched for a single
partition.
The best option here is just to stop the replication process.
So the answer to your question is (3), but this scenario should never happen.
>
> On Thu, Dec 29, 2
" the same offset?
The same way Kafka handles concurrent produce requests for the same
partition - produce requests for a partition are serialized.
If the next produce request "overlaps" with the previous one, it fails.
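The serialize-and-reject idea above can be sketched in a few lines. This is a toy model with hypothetical names (`PartitionLog`, `append_replicated`), not Kafka's actual API: appends to a partition are applied one at a time, and a replicated batch whose base offset does not match the current log end offset is refused as overlapping.

```python
class PartitionLog:
    """Toy partition log: appends are serialized, and a replicated
    batch that overlaps already-written offsets is rejected."""

    def __init__(self):
        self.messages = []  # list index == message offset in this toy model

    def log_end_offset(self):
        return len(self.messages)

    def append_replicated(self, base_offset, batch):
        # A batch must start exactly at the log end offset; a batch with
        # base_offset below it "overlaps" previous messages and fails.
        if base_offset != self.log_end_offset():
            raise ValueError(
                f"offset mismatch: expected {self.log_end_offset()}, "
                f"got {base_offset}")
        self.messages.extend(batch)
        return self.log_end_offset()


log = PartitionLog()
log.append_replicated(0, ["m0", "m1"])   # ok, log end offset is now 2
try:
    log.append_replicated(1, ["m1-dup"])  # overlaps offset 1 -> rejected
except ValueError as e:
    print("rejected:", e)
```

With two replicator instances racing on one partition, whichever request loses the serialization order sees an offset mismatch and fails, which is what makes "just stop replication" a safe response.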
>
>
> On Mon, Dec 26, 2016 at 4:52 AM, Andrey L. Neporada <
>
Hi all!
Suppose you have two Kafka clusters and want to replicate topics from primary
cluster to secondary one.
It would be very convenient for readers if the message offsets of replicated
topics were the same as those of the primary topics.
As far as I know, currently there is no way to achieve
since the vote started and there are 3
> binding +1 votes (and 3 non-binding +1 votes), so you are free to declare
> the vote as passed whenever you're ready. :)
>
> Will you be able to update the PR to match the KIP soon?
>
> Thanks,
> Ismael
>
> On Mon, Aug 22, 2016
ord <tcrayf...@heroku.com> wrote:
>
>> +1 (non binding)
>>
>> On Fri, Aug 19, 2016 at 6:20 AM, Manikumar Reddy <
>> manikumar.re...@gmail.com>
>> wrote:
>>
>>> +1 (non-binding)
>>>
>>> This feature helps us control memory footprint and
7 Aug 2016, at 00:02, Jun Rao <j...@confluent.io> wrote:
>
> Andrey,
>
> Thanks for the KIP. +1
>
> Jun
>
> On Tue, Aug 16, 2016 at 1:32 PM, Andrey L. Neporada <
> anepor...@yandex-team.ru> wrote:
>
>> Hi!
>>
>> I would like
Hi, Jason!
> On 17 Aug 2016, at 21:53, Jason Gustafson wrote:
>
> Hi Andrey,
>
> Thanks for picking this up and apologies for the late comment.
>
> One thing worth mentioning is that the consumer actually sends multiple
> parallel fetch requests, one for each broker that
Hi!
I would like to initiate the voting process for KIP-74:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
Thanks,
Andrey.
Hi!
> On 16 Aug 2016, at 20:28, Jun Rao wrote:
>
> Hi, Andrey,
>
> I was thinking of just doing 2 for the new fetch request for backward
> compatibility.
>
> It seems there are no more comments on this thread. So, we can probably
> start the voting thread once you update
Hi, Jun!
> On 16 Aug 2016, at 18:52, Jun Rao wrote:
>
> Hi, Andrey,
>
> For 2, we actually can know the next message size. In LogSegment.read(), we
> first use the offset index to find the file position close to the requested
> offset and then scan the log forward to find
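The two-step lookup Jun describes - a sparse offset index narrowing the search, then a forward scan of the log - can be sketched with toy data structures (this is illustrative Python, not the real `LogSegment.read()` code):

```python
import bisect

def locate_message(segment, sparse_index, target_offset):
    """segment: list of (offset, size) records in offset order.
    sparse_index: sorted list of (offset, segment_position) entries,
    one per every few records (it is sparse, like Kafka's .index files).
    Binary-search the index for the nearest position at or before the
    target offset, then scan the log forward to the message itself."""
    keys = [o for o, _ in sparse_index]
    i = bisect.bisect_right(keys, target_offset) - 1
    pos = sparse_index[i][1] if i >= 0 else 0
    for offset, size in segment[pos:]:   # forward scan from indexed position
        if offset >= target_offset:
            return offset, size          # the next message's size is known here
    return None                          # offset is past the segment's end


segment = [(0, 100), (1, 250), (2, 80), (3, 4000), (4, 120)]
sparse_index = [(0, 0), (2, 2), (4, 4)]
print(locate_message(segment, sparse_index, 3))  # (3, 4000)
```

The point relevant to the discussion is the return value: once the scan lands on the requested offset, the size of the next message to fetch is known, so the broker is not guessing.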
Hi Jun!
Thanks for feedback.
> On 15 Aug 2016, at 20:04, Jun Rao wrote:
>
> Hi, Andrey,
>
> Thanks for the update to the wiki. Just a few more minor comments.
>
> 1. "If *response_max_bytes* parameter is zero ("no limit"), the request is
> processed *exactly* as before."
Hi all!
KIP-74 is updated to sync up with mail list discussion.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-74%3A+Add+Fetch+Response+Size+Limit+in+Bytes
Your feedback is highly appreciated.
Thanks,
Andrey.
> On 12 Aug 2016, at 20:49, Andrey L. Neporada <anepor...@yandex-t
response limit to implement some kind of round-robin to ensure fairness (if
they care about it)
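The round-robin idea mentioned here can be sketched as follows. This is a hypothetical client-side helper (the names and shapes are mine, not Kafka's): the partition order is rotated between fetches, so the partition that gets truncated by the response limit changes over time and no partition is permanently starved.

```python
from collections import deque

def fetch_round_robin(partition_data, order, response_max_bytes):
    """partition_data: dict partition -> list of pending message sizes.
    order: deque of partitions, rotated after every fetch so each
    partition eventually goes first despite the response byte limit."""
    response, used = {p: [] for p in order}, 0
    for p in list(order):
        for size in partition_data[p]:
            # Always return at least one message so progress is made,
            # even if that single message alone exceeds the limit.
            if used > 0 and used + size > response_max_bytes:
                order.rotate(-1)
                return response
            response[p].append(size)
            used += size
    order.rotate(-1)   # next fetch starts from a different partition
    return response


order = deque(["a", "b", "c"])
pending = {"a": [60, 60], "b": [60], "c": [60]}
first = fetch_round_robin(pending, order, response_max_bytes=100)
second = fetch_round_robin(pending, order, response_max_bytes=100)
# first fetch serves "a" and truncates; second starts from "b" instead
```

The rotation is the whole trick: with a fixed partition order, the partitions at the tail would be cut off by the limit on every fetch.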
> Thanks,
>
> Jun
>
Thanks,
Andrey.
> On Fri, Aug 12, 2016 at 3:56 AM, Andrey L. Neporada <
> anepor...@yandex-team.ru> wrote:
>
>> Hi!
>>
>>
Thanks!
Will do.
> On 12 Aug 2016, at 08:29, Ben Stopford wrote:
>
> Andrey
>
> To make progress, I suggest you keep the partition-level limit in, at least
> for now, and keep it on the FetchRequest too.
>
> B
>
Hi!
> On 12 Aug 2016, at 04:29, Jun Rao wrote:
>
> Hi, Andrey,
>
> One potential benefit of keeping the per partition limit is for Kafka
> stream. When reading messages from different partitions, KStream prefers to
> read from partitions with smaller timestamps first and
Hi, Jun!
Thanks for feedback!
> On 10 Aug 2016, at 17:42, Jun Rao <j...@confluent.io> wrote:
>
> Hi, Andrey,
>
> Thanks for the reply. A couple of more comments inline below.
>
> On Wed, Aug 10, 2016 at 3:56 AM, Andrey L. Neporada <
> anepor...@yandex-team.ru
Hi!
> On 09 Aug 2016, at 20:46, Jun Rao wrote:
>
> Hi, Andrey,
>
> Thanks for the proposal. It looks good overall. Some minor comments.
>
> 1. It seems a bit weird that fetch.partition.max.bytes is a broker
> level configuration while fetch.limit.bytes is a client
Hi all!
I’ve just created KIP-74: Add Fetch Response Size Limit in Bytes.
The idea is to limit client memory consumption when fetching many partitions
(especially useful for replication).
Full details are here:
Hi all!
I would like to get your feedback on the PR for bug KAFKA-2063.
Looks like a KIP is needed there, but it would be nice to get feedback first.
Thanks,
Andrey.
> On 22 Jul 2016, at 12:26, Andrey L. Neporada <anepor...@yandex-team.ru> wrote:
>
> Hi!
>
> Thanks f
4MB of free memory, so making it smaller some of the time doesn't
> really help you.
>
> -Jay
>
> On Thu, Jul 21, 2016 at 2:49 AM, Andrey L. Neporada <
> anepor...@yandex-team.ru> wrote:
>
>> Hi all!
>>
>> We noticed that our Kafka cluster uses a l
kaApis -> ReplicaManager -> Log -> LogSegment, then to
> FetchResponse and FetchResponseSend (in case you want some pointers to some
> code).
>
> I may be missing something here, but there seems to be a deeper issue here,
>
> Tom Crayford
> Heroku Kafka
>
> On Thu,
Hi all!
We noticed that our Kafka cluster uses a lot of memory for replication. Our
Kafka usage pattern is as follows:
1. Most messages are small (tens or hundreds of kilobytes at most), but some
(rare) messages can be several megabytes. So, we have to set
replica.fetch.max.bytes =
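The memory problem described above is simple arithmetic: with only a per-partition fetch limit, the buffer a fetcher must be prepared for grows linearly with the number of partitions. A toy calculation (the numbers are illustrative, since the actual setting is cut off above):

```python
def worst_case_fetch_memory(num_partitions, replica_fetch_max_bytes):
    """Without a response-level limit, every partition in one fetch
    response may return up to replica.fetch.max.bytes, so worst-case
    memory scales with the partition count."""
    return num_partitions * replica_fetch_max_bytes


# Hypothetical numbers: 1000 partitions, per-partition limit raised to
# 8 MiB to accommodate the rare multi-megabyte messages.
mem = worst_case_fetch_memory(1000, 8 * 1024 * 1024)
print(mem // (1024 * 1024), "MiB")  # 8000 MiB worst case for a single fetch
```

This is exactly the tension KIP-74 addresses: a single response-size limit in bytes bounds the fetcher's memory regardless of how many partitions it subscribes to.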