Hi all,
I'm having difficulty putting together a worked example of the calculation for the following
formula.
Based on throughput requirements one can pick a rough number of partitions.
1. Let's call the throughput from a producer to a single partition P.
2. Let's call the throughput from a single partition to a consumer C.
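The sizing rule being asked about is usually written as picking at least max(t/p, t/c) partitions, where t is the target overall throughput, p the per-partition producer throughput, and c the per-partition consumer throughput. A minimal sketch with made-up numbers (the function name and the figures are illustrative, not from this thread):

```python
import math

def rough_partition_count(target_mb_s: float, p_mb_s: float, c_mb_s: float) -> int:
    """Pick enough partitions that neither the producer-side nor the
    consumer-side per-partition throughput becomes the bottleneck."""
    return max(math.ceil(target_mb_s / p_mb_s), math.ceil(target_mb_s / c_mb_s))

# Hypothetical example: 100 MB/s target, 10 MB/s per partition from a
# producer, 20 MB/s per partition to a consumer -> 10 partitions.
print(rough_partition_count(100, 10, 20))
```

This is only a starting point; per-partition throughput in practice depends on batching, replication, and consumer processing logic.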
Thanks Jun
On Fri, 2 Mar 2018 at 02:25 Jun Rao wrote:
> KAFKA-6111 is now merged to 1.1 branch.
>
> Thanks,
>
> Jun
>
> On Thu, Mar 1, 2018 at 2:50 PM, Jun Rao wrote:
>
>> Hi, Damian,
>>
>> It would also be useful to include KAFKA-6111, which prevents
>> deleteLogDirEventNotifications
>> path
I am new to Kafka but I think I have a good use case for it. I am trying
to build daily counts of requests based on a number of different attributes
in a high throughput system (~1 million requests/sec. across all 8
servers). The different attributes are unbounded in terms of values, and
some wi
Hello Jie,
By default Kafka Streams uses caching on top of its internal state stores
to de-dup output streams to the final destination (in your case the DB) so
that for a single key, fewer updates will be generated giving a small
working set. If your aggregation logic follows such key distribution
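The caching behavior described above is controlled by two Streams configs. A hypothetical fragment (values are illustrative; a larger cache and a longer commit interval mean more de-duping before updates are forwarded downstream):

```properties
# 50 MB of record cache across the topology's state stores
cache.max.bytes.buffering=52428800
# Caches are also flushed on commit; a longer interval batches more updates
commit.interval.ms=30000
```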
Jie:
Which DB are you using ?
600 records/second is a very low rate.
Probably your DB needs some tuning.
Cheers
On Fri, Mar 2, 2018 at 9:32 AM, Guozhang Wang wrote:
> Hello Jie,
>
> By default Kafka Streams uses caching on top of its internal state stores
> to de-dup output streams to the final
I used MongoDB, but I need to run 10 update operations for one record.
One thread takes 20 ms to process a record.
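Working the numbers in this thread: at 20 ms per record, a single thread tops out around 50 records/sec, and with 10 update operations per record the DB sees about 500 operations/sec per thread. A quick back-of-envelope sketch (figures taken from the messages above):

```python
# 20 ms per record, 10 DB update operations per record (from this thread)
ms_per_record = 20
db_ops_per_record = 10

records_per_sec_per_thread = 1000 // ms_per_record            # 50 records/sec
db_ops_per_sec_per_thread = records_per_sec_per_thread * db_ops_per_record  # 500 ops/sec

# Threads needed to sustain the ~600 records/sec rate mentioned earlier
threads_needed = 600 // records_per_sec_per_thread            # 12 threads
print(records_per_sec_per_thread, db_ops_per_sec_per_thread, threads_needed)
```

This suggests the bottleneck is the per-record DB round-trips rather than Kafka itself; batching the 10 updates into one write would change the arithmetic considerably.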
--- Original Message ---
From: "Ted Yu "
Sent: March 3, 2018, 01:37:13
To: "users";
Subject: Re: Reply: which Kafka StateStore could I use ?
Jie:
Which DB are you using ?
600 records/second is very low rate.
Probably
Can you share some tips for this?
--- Original Message ---
From: "Guozhang Wang "
Sent: March 3, 2018, 01:32:55
To: "users";
Subject: Re: Reply: which Kafka StateStore could I use ?
Hello Jie,
By default Kafka Streams uses caching on top of its internal state stores
to de-dup output streams to the final destination (in yo
Actually it looks like the better way would be to output the counts to a
new topic then ingest that topic into the DB itself. Is that the correct
way?
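The usual tool for the "write counts to a topic, then ingest that topic into the DB" approach is Kafka Connect with a sink connector. A hypothetical sink configuration, assuming Confluent's JDBC sink connector is available and the DB speaks JDBC (the connector name, topic, and connection URL below are made up for illustration):

```json
{
  "name": "counts-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "topics": "request-counts",
    "connection.url": "jdbc:postgresql://db-host:5432/metrics",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "auto.create": "true"
  }
}
```

With upsert mode, repeated updates for the same count key overwrite a single row instead of appending, which fits the daily-counts use case.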
On Fri, Mar 2, 2018 at 9:24 AM, Matt Daum wrote:
> I am new to Kafka but I think I have a good use case for it. I am trying
> to build daily co
We are looking for a consultant or contractor that can come onsite to our
Ogden, Utah location in the US, to help with a Kafka set up and maintenance
project. What we need is someone with the knowledge and experience to build
out the Kafka environment from scratch.
We are thinking they would
try https://www.confluent.io/ - that's what they do
/svante
2018-03-02 21:21 GMT+01:00 Matt Stone :
> We are looking for a consultant or contractor that can come onsite to our
> Ogden, Utah location in the US, to help with a Kafka set up and maintenance
> project. What we need is someone with t
Hi Apache Kafka users email distribution list,
I'm trying to post a message with an LSN from the past to the connect-offsets topic. I dump
the connect-offsets topic with the following command:
./kafka-console-consumer.sh --bootstrap-server
--consumer.config ../config/consumer.properties --property pri
Thank you I will look into that.
-Original Message-
From: Svante Karlsson [mailto:svante.karls...@csi.se]
Sent: Friday, March 2, 2018 1:50 PM
To: users@kafka.apache.org
Subject: Re: Consultant Help
try https://www.confluent.io/ - that's what they do
/svante
2018-03-02 21:21 GMT+01:00
Hey, Srinivasa. It sounds like you’re running an intermediate version of
the master branch (I remember that specific error as I was making some
changes). It should be resolved with the latest version of master. Can you
try pulling the latest master?
We’ll be cutting a new release version soon, as