Thanks Damian, it worked. I changed StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG and was able to reduce the retention time of the changelog topic.
-Sameer.
On Mon, Oct 30, 2017 at 9:38 PM, Damian Guy wrote:
> The retention for the joins is as specified above. With
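For reference, the config Sameer mentions is set on the Streams properties. A minimal sketch, assuming Kafka Streams is on the classpath; the application id, bootstrap server, and the 60-second value are placeholders, not values from the thread:

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class RetentionConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-app");             // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        // Extra retention Streams adds on top of a window's own retention
        // when it creates the windowed changelog topic.
        props.put(StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG, 60_000L);
        return props;
    }
}
```

Note this only affects changelog topics that Streams creates; existing topics keep the retention they were created with.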
Hi all,
On my cluster, one topic is 100% under-replicated. I have 7 brokers and the topic has 7 partitions, but the topic currently uses only 5 brokers as leaders. How can I change it to use all the brokers as leaders?
2017-11-01
lk_kafka
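For the leader-imbalance question above: assuming the topic's replica assignment already places a preferred replica on each of the 7 brokers, the era-appropriate fix is to trigger a preferred-replica election. A sketch (the ZooKeeper address is a placeholder):

```shell
# Trigger preferred leader election for all partitions (Kafka 0.10/0.11-era tool).
# If the preferred replicas are spread across all 7 brokers,
# leadership moves back to them.
bin/kafka-preferred-replica-election.sh --zookeeper zk1:2181
```

If the preferred replicas themselves are not spread across all brokers, the assignment has to be changed first with kafka-reassign-partitions.sh; brokers with auto.leader.rebalance.enable=true will also rebalance leadership periodically on their own.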
Hi,
Please add me to the user group.
Thanks,
Karthigeyan
Hello,
I just want to know whether Kafka is supported on IBM System z Linux or z/OS.
Regards,
Hello,
I am a new Kafka user, trying to reach Kafka's limits: 1,000,000 messages per second.
To that end, I provisioned three Azure VMs with this configuration:
16 CPU cores
56 GB RAM
All disks are premium LRS with a minimum capacity of 512 GB.
Download speed: 2696.28 Mbit/s
Upload speed: 1121.13 Mbit/s
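For throughput tests like the one described above, Kafka ships a producer benchmark script. A sketch (topic name, record count/size, and bootstrap address are placeholders):

```shell
# Send 1,000,000 records of 100 bytes as fast as possible
# and report achieved throughput and latency percentiles.
bin/kafka-producer-perf-test.sh \
  --topic perf-test \
  --num-records 1000000 \
  --record-size 100 \
  --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092
```

Running several instances in parallel (one per VM) is the usual way to probe the 1M msg/s range, since a single producer process often saturates before the cluster does.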
Hi,
I would like to understand the purpose of ZkUtils.getAllPartitions: when I try to use the method, I end up with the wrong number of partitions assigned to topics. I am not sure whether my understanding of this method is wrong; I had assumed it would return the partition count.
B
How about upgrading to 0.10.1.1 or higher, as suggested by Ismael?
On Tue, Oct 31, 2017 at 3:42 AM, Yuanjia wrote:
> Hi Ted,
> It doesn't look like the same issue.
> In my case, node 6 doesn't shrink the ISR for the partitions it owns
> down to itself, and all clients work well.
>
Ohhh... thank you. It's clear now.
On Tue, Oct 31, 2017 at 4:36 PM, Damian Guy wrote:
> Hi, the `map` when it is followed by `groupByKey` will cause a
> repartitioning of the data, so you will have your 5 tasks processing the
> input partitions and 5 tasks processing the partitions from the
> repartitioning.
Hi, the `map` when it is followed by `groupByKey` will cause a
repartitioning of the data, so you will have your 5 tasks processing the
input partitions and 5 tasks processing the partitions from the
repartitioning.
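The two groups of tasks Damian describes explain the IDs in the next message: Streams names each task `<sub-topology>_<partition>`, so five input partitions plus five repartition partitions yield ten tasks. A stdlib-only sketch of that naming scheme (the scheme itself is inferred from the IDs quoted below):

```java
import java.util.ArrayList;
import java.util.List;

public class TaskIds {
    // Streams names each task "<subTopologyId>_<partition>".
    static List<String> taskIds(int subTopologies, int partitions) {
        List<String> ids = new ArrayList<>();
        for (int s = 0; s < subTopologies; s++) {
            for (int p = 0; p < partitions; p++) {
                ids.add(s + "_" + p);
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        // map + groupByKey adds a repartition sub-topology: 2 * 5 = 10 tasks
        System.out.println(taskIds(2, 5));
        // → [0_0, 0_1, 0_2, 0_3, 0_4, 1_0, 1_1, 1_2, 1_3, 1_4]
    }
}
```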
On Tue, 31 Oct 2017 at 10:56 pravin kumar wrote:
> I have created a stream with
I have created a stream with a topic containing 5 partitions and expected 5 stream tasks to be created, but I got 10 tasks:
0_0 0_1 0_2 0_3 0_4 1_0 1_1 1_2 1_3 1_4
I'm doing word count in this example;
here is my topology at this link: 1.
https://gist.github.com/Pk007790/72b0718f26e6963246e83da992
Hi Ted,
It doesn't look like the same issue.
In my case, node 6 doesn't shrink the ISR for the partitions it owns down to itself, and all clients work well.
I noticed KAFKA-5153; it may be the same as mine, but it hasn't been updated in a long time.
From: Ted Yu
Date: 2017-10-30 17