Jie: Which DB are you using? 600 records/second is a very low rate.
Probably your DB needs some tuning.

Cheers

On Fri, Mar 2, 2018 at 9:32 AM, Guozhang Wang <wangg...@gmail.com> wrote:

> Hello Jie,
>
> By default Kafka Streams uses caching on top of its internal state stores
> to de-duplicate the output stream to the final destination (in your case
> the DB), so that for a single key fewer updates are generated, giving a
> small working set. If your aggregation logic follows such a key
> distribution, you can try enlarging the cache size (by default it is only
> 50MB) and see if it helps reduce the downstream traffic to your DB.
>
> Guozhang
>
> On Thu, Mar 1, 2018 at 6:33 PM, 杰 杨 <funk...@live.com> wrote:
>
> > Yes, but the DB's concurrency level is the limitation.
> > Right now I can process 600 records/second, and I want to improve on
> > that.
> >
> > Sent from Mail for Windows 10 <https://go.microsoft.com/fwlink/?LinkId=550986>
> >
> > From: Guozhang Wang <mailto:wangg...@gmail.com>
> > Sent: March 2, 2018, 2:59
> > To: email@example.com<mailto:firstname.lastname@example.org>
> > Subject: Re: which Kafka StateStore could I use ?
> >
> > Hello Jie,
> >
> > Just to understand your problem better: are you referring to the "db"
> > as an external storage engine outside Kafka Streams, and asking how to
> > send only one record per aggregation key (assuming you are doing some
> > aggregations with Streams' state store) to that end storage engine?
> >
> > Guozhang
> >
> > On Wed, Feb 28, 2018 at 7:53 PM, 杰 杨 <funk...@live.com> wrote:
> >
> > > Hi:
> > > I use Kafka Streams for real-time data analysis, and I have run into
> > > a problem. Currently I process each record from Kafka, compute on it,
> > > and send the result to the DB, but the DB's concurrency level is not
> > > sufficient for me. So I want the following:
> > > 1) when there is no data in Kafka, the state store holds no results;
> > > 2) when there are many records in Kafka, the state store saves the
> > > computed results and I send them to the DB once.
> > > Which StateStore can I use to do the above?
> > > ________________________________
> > > funk...@live.com
> >
> > --
> > -- Guozhang
>
> --
> -- Guozhang
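[Editor's note: the pattern discussed above — aggregate per key into a state store, then emit one row per key to the DB on a schedule instead of one write per input record — can be sketched in plain Java. The class below is an illustrative stand-in, not Kafka Streams API: a `HashMap` plays the role of the `KeyValueStore`, and `flushToDb()` plays the role of a wall-clock `Punctuator`. The `upsert` call it alludes to is a hypothetical DB helper.]

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the batch-and-flush pattern from the thread:
// aggregate per key locally, do DB I/O only at flush time, so many input
// records per key collapse into a single DB write per key.
public class BatchedAggregator {
    // Stands in for a Kafka Streams KeyValueStore<String, Long>.
    private final Map<String, Long> store = new HashMap<>();
    private int dbWrites = 0; // counts simulated DB round-trips

    // Called once per incoming record: update the aggregate, no DB I/O.
    public void process(String key, long delta) {
        store.merge(key, delta, Long::sum);
    }

    // Called periodically (in Kafka Streams this would be a Punctuator):
    // one DB write per distinct key, then clear the batch.
    public int flushToDb() {
        for (Map.Entry<String, Long> e : store.entrySet()) {
            // upsert(e.getKey(), e.getValue()); // real DB call would go here
            dbWrites++;
        }
        int flushed = store.size();
        store.clear();
        return flushed;
    }

    public int totalDbWrites() {
        return dbWrites;
    }

    public static void main(String[] args) {
        BatchedAggregator agg = new BatchedAggregator();
        // 10,000 input records spread over only 3 keys...
        for (int i = 0; i < 10_000; i++) {
            agg.process("key" + (i % 3), 1L);
        }
        // ...collapse into 3 DB writes at flush time.
        int flushed = agg.flushToDb();
        System.out.println(flushed + " keys flushed, "
                + agg.totalDbWrites() + " DB writes"); // prints "3 keys flushed, 3 DB writes"
    }
}
```

In real Kafka Streams code the same shape is a `Processor` backed by a `KeyValueStore`, with `context.schedule(...)` and `PunctuationType.WALL_CLOCK_TIME` driving the periodic flush; the DSL-level caching Guozhang mentions is controlled by the `cache.max.bytes.buffering` and `commit.interval.ms` configs.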