The broker still has a direct dependency on log4j 1.x. There is a PR in
progress to remove this direct dependency.
Ismael
On Thu, Nov 9, 2017 at 10:17 PM, Arunkumar
wrote:
> Hi All
> We have a requirement to migrate from log4j 1.x to log4j 2 for our Kafka
> brokers
We're considering an architecture that would result in 5K-10K consumer
groups consuming from a single topic that has one partition.
What are the reasonable limits for the max number of consumer groups per
partition and per broker?
Can a single broker be the group coordinator for 1K+ consumer groups?
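For context on coordinator load: a group's coordinator is the leader of the __consumer_offsets partition that the group id hashes to, so with the default of 50 offsets partitions, 5K-10K groups land on at most 50 coordinators. The sketch below illustrates that mapping; it is an assumption modeled on the broker's partitionFor logic, not the exact implementation:

```java
public class CoordinatorPartition {
    // Map a group id to a __consumer_offsets partition; the leader of that
    // partition acts as the group coordinator.
    static int partitionFor(String groupId, int offsetsTopicPartitionCount) {
        // floorMod keeps the result non-negative even for negative hashCodes
        return Math.floorMod(groupId.hashCode(), offsetsTopicPartitionCount);
    }

    public static void main(String[] args) {
        // With the default 50 partitions, 10K groups spread over at most 50
        // coordinators, so some brokers coordinate hundreds of groups each.
        System.out.println(partitionFor("my-group", 50));
    }
}
```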
I was thinking about a controlled-stream use case, where one stream is the data
for processing, while the second one controls execution.
If I want to scale this, I want to run multiple instances. In this case I want
these instances to share the data topic, but the control topic should be
delivered to all of them.
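That pattern maps directly onto consumer group ids: give every instance the same group.id for the data topic (so its partitions are balanced across instances) and a unique group.id for the control topic (so every instance receives every control record). A minimal sketch of the two configurations, with a hypothetical class name and a local broker assumed:

```java
import java.util.Properties;
import java.util.UUID;

public class GroupConfigs {
    // All instances share this group id, so the data topic's partitions
    // are divided among them.
    static Properties dataConsumerProps() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092"); // assumption
        p.setProperty("group.id", "data-processors");
        return p;
    }

    // Each instance generates its own group id, so every instance gets
    // every record from the control topic (broadcast semantics).
    static Properties controlConsumerProps() {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092"); // assumption
        p.setProperty("group.id", "control-" + UUID.randomUUID());
        return p;
    }

    public static void main(String[] args) {
        System.out.println(dataConsumerProps().getProperty("group.id"));
    }
}
```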
Hi Bill,
I was on 0.10.2.1.
Thank you
Rainer
Sent: Monday, November 13, 2017 at 4:02 PM
From: "Bill Bejeck"
To: users@kafka.apache.org
Subject: Re: Kafka Streams - Custom processor "init" method called before state
store has data restored into it
Rainer,
Thanks for the info.
What version were you using before upgrading to 1.0.0?
Thanks,
Bill
On Sun, Nov 12, 2017 at 7:06 PM, Rainer Guessner wrote:
> Hi Bill,
>
> thanks for the suggestion to use StateRestoreListener; however, that does
> not solve my issue, as it's a
Boris,
What are the use-case scenarios in which you'd prefer to set different
subscriber IDs for different streams?
Guozhang
On Mon, Nov 13, 2017 at 6:49 AM, Boris Lublinsky <
boris.lublin...@lightbend.com> wrote:
> This seems like a very limiting implementation
>
>
> Boris Lublinsky
> FDP
If you want to plug a store into a DSL operator like aggregate, it
must be a key-value store, as the aggregation is key-based (similarly, it
must be a windowed store or session store for a window/session aggregation).
Not sure what code you wrote in 0.11 -- there, it must also be
Is there an example code for this somewhere?
Also, does it have to be key/value?
In my case a store is just a state, so the key is not exposed.
It was working fine in 0.11.0, but now the semantics are very different.
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
>
You can plug in a custom store via the `Materialized` parameter, which allows
you to specify a custom `KeyValueBytesStoreSupplier` (and others).
-Matthias
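A hedged sketch of what that looks like in the Kafka Streams 1.0 API: a DSL aggregation backed by a custom store via Materialized. `MyStoreSupplier` is a hypothetical class implementing `KeyValueBytesStoreSupplier`, and the fragment assumes the kafka-streams dependency on the classpath:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;

StreamsBuilder builder = new StreamsBuilder();
KStream<String, Long> stream = builder.stream("input-topic");

// The aggregation is still key-based, but the backing store is custom:
// MyStoreSupplier (hypothetical) implements KeyValueBytesStoreSupplier.
stream.groupByKey()
      .aggregate(
          () -> 0L,                          // initializer
          (key, value, agg) -> agg + value,  // aggregator
          Materialized.<String, Long>as(new MyStoreSupplier("my-store"))
                      .withKeySerde(Serdes.String())
                      .withValueSerde(Serdes.Long()));
```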
On 11/13/17 10:26 AM, Boris Lublinsky wrote:
> On Nov 13, 2017, at 12:24 PM, Boris Lublinsky wrote:
>
> It looks like for the custom state store implementation the only option is to
> use Topology APIs.
> The problem is that in the case of DSL, Kafka streams does not provide any
> option to create Store
It looks like for the custom state store implementation the only option is to
use Topology APIs.
The problem is that in the case of DSL, Kafka streams does not provide any
option to create Store Builder for a custom store.
Am I missing something?
Boris Lublinsky
FDP Architect
I have updated my queryable state example, based on
https://github.com/confluentinc/examples/tree/3.2.x/kafka-streams/src/main/java/io/confluent/examples/streams/interactivequeries
This seems like a very limiting implementation
Boris Lublinsky
FDP Architect
boris.lublin...@lightbend.com
https://www.lightbend.com/
> On Nov 13, 2017, at 4:21 AM, Damian Guy wrote:
>
> Hi,
>
> The configurations apply to all streams consumed within the same streams
>
Hi Ted,
Sorry for the late response I forgot to subscribe to the Mailing list…
I read that topic, but it seems they lose events only while upgrading
their cluster. In my case the cluster is already upgraded, but I'm stuck on
this format version because of one of our legacy
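For reference, and as an assumption about which settings this thread is discussing: a broker can run a newer software version while pinning the on-disk message format for old clients, e.g. in server.properties:

```properties
# Brokers upgraded to 1.0, but the message format held back so that
# legacy clients (pre-0.10.2 in this hypothetical) can still consume.
inter.broker.protocol.version=1.0
log.message.format.version=0.10.2
```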
Hi Damian,
in the blog it is mentioned that, although cumbersome, there is another way
of achieving the same result.
Something like this:
KTable table1 = builder.stream("topic1").groupByKey()
    .aggregate(initializer1, aggregator1, aggValueSerde1, storeName1);
KTable table2 =
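For completeness, the rest of that pre-KIP-150 workaround might look like the following. This is a hedged sketch in the same pre-1.0 style as the snippet above; the initializers, aggregators, serdes, store names, and the Combined value type are all placeholders:

```java
// Aggregate each input stream into its own KTable...
KTable<String, Agg1> table1 = builder.stream("topic1").groupByKey()
    .aggregate(initializer1, aggregator1, aggValueSerde1, storeName1);
KTable<String, Agg2> table2 = builder.stream("topic2").groupByKey()
    .aggregate(initializer2, aggregator2, aggValueSerde2, storeName2);

// ...then combine the per-topic aggregates with an outer join. Each
// additional input topic costs another join and another state store,
// which is why this approach is described as cumbersome.
KTable<String, Combined> cogrouped =
    table1.outerJoin(table2, (agg1, agg2) -> new Combined(agg1, agg2));
```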
Hi,
This KIP didn't make it into 1.0, so it can't be done at the moment.
Thanks,
Damian
On Mon, 13 Nov 2017 at 14:00 Artur Mrozowski wrote:
> Hi,
> I wonder if anyone could shed some light on how to implement CoGroup in
> Kafka Streams in current version 1.0, as mentioned
Hi,
I wonder if anyone could shed some light on how to implement CoGroup in
Kafka Streams in current version 1.0, as mentioned in this blog post:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-150+-+Kafka-Streams+Cogroup
I am new to Kafka and would appreciate it if anyone could provide an