>>> define the concept of individual ack,
>>> which means we could skip records and leave certain records on the
>>> queue for late processing. This should be something similar to KIP-408,
>>> which also shares some motivations for us to invest.
>>>
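The "individual ack" idea above (skip a record but leave it on the queue for late processing, while the committable offset only advances over a contiguous acked prefix) could be sketched roughly as follows. This is a hypothetical illustration only; none of these names come from the KIP or the consumer API:

```python
from collections import OrderedDict

class IndividualAckQueue:
    """Sketch of per-record acknowledgement: records may be acked out of
    order, the commit point only advances past a contiguous prefix of
    acked offsets, and unacked records stay queued for late processing.
    (Illustrative only, not the KIP-408 design.)"""

    def __init__(self):
        self._pending = OrderedDict()  # offset -> record still on the queue
        self._acked = set()            # individually acked offsets
        self._committed = -1           # highest offset safe to commit

    def add(self, offset, record):
        self._pending[offset] = record

    def ack(self, offset):
        self._acked.add(offset)
        self._pending.pop(offset, None)
        # Advance the commit point over every contiguous acked offset.
        while (self._committed + 1) in self._acked:
            self._committed += 1

    def committable_offset(self):
        return self._committed

    def unacked(self):
        # Records skipped for now and left on the queue.
        return list(self._pending.items())
```

For example, acking offsets 0 and 2 but skipping 1 leaves the committable offset at 0 and keeps record 1 on the queue until it is acked later.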
>> ________________
>> From: Richard Yu
>> Sent: Friday, January 4, 2019 5:42 AM
>> To: dev@kafka.apache.org
>> Subject: Re: [DISCUSS] KIP-408: Add Asynchronous Processing to Kafka
>> Streams
>>
>> Hi all,
>>
>> Just bumping this KIP. Would be great if we got some discussion.
>>
>>
>> On Sun, Dec 30, 2018 at 5:13 PM Richard Yu
>> wrote:
>>
>>> processed, we need an RPC to external storage that takes non-trivial time,
>>> and before it finishes, the 499 records before it shouldn't be visible to
>>> the end user. In such a case, we need to have fine-grained control on the
>>> visibility for the downstream consumer, so that our async task plants a
>>> barrier while still processing the 499 records without blocking and sending
>>> them downstream. So eventually, when the heavy RPC is done, we commit this
>>> record to remove the barrier and make all 500 records available for
>>> downstream. So here we still need to guarantee the ordering within the 500
>>> records, while at the same time the consumer semantics have nothing to
>>> change.
>>>
>>> Am I making the point clear here? I just want to have more discussion on
>>> the ordering guarantee, since I feel it wouldn't be a good idea to break
>>> the consumer ordering guarantee by default.
>>>
>>> Best,
>>> Boyang
>>>
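The barrier behavior described above (the 499 other records are processed without blocking, but their output stays invisible until the record with the heavy RPC commits, so downstream still sees all 500 in order) might be sketched like this. The class and method names are invented for illustration and are not part of Kafka Streams:

```python
import threading

class BarrierBuffer:
    """Illustrative sketch: while a barrier record's heavy RPC is in
    flight, results of already-processed records are buffered rather than
    emitted; committing the barrier record releases the whole batch
    downstream in offset order, preserving ordering without blocking the
    processing of the other records."""

    def __init__(self, emit):
        self._emit = emit              # downstream callback
        self._lock = threading.Lock()  # results may arrive from worker threads
        self._results = {}             # offset -> processed result

    def record_done(self, offset, result):
        # Non-blocking processing finished; the result stays invisible
        # to downstream while the barrier holds.
        with self._lock:
            self._results[offset] = result

    def commit_barrier(self):
        # The heavy RPC completed: remove the barrier and emit everything
        # in the original offset order.
        with self._lock:
            for offset in sorted(self._results):
                self._emit(self._results[offset])
            self._results.clear()
```

In this sketch the commit of the barrier record is also the point where the consumer offset could safely advance, which is why the consumer-side semantics would not need to change.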
> ________________
> From: Richard Yu
> Sent: Saturday, December 22, 2018 9:08 AM
> To: dev@kafka.apache.org
> Subject: Re: KIP-408: Add Asynchronous Processing to Kafka Streams
>
> Hi Boyang,
>
> Thanks for pointing out the possibility of skipping
2, 2018 2:00 AM
> To: dev@kafka.apache.org
> Subject: KIP-408: Add Asynchronous Processing to Kafka Streams
>
> Hi all,
>
> Lately, there has been considerable interest in adding asynchronous
> processing to Kafka Streams.
> Here is the KIP for such an addition:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-408%3A+Add+Asynchronous+Processing+To+Kafka+Streams
>
> I wish to discuss the best ways to approach