Hi,
It seems that consumer group rebalance is broken in Kafka 0.10.1.0?
When running a small test project:
- consumers running in their own JVMs (with different 'client.id' settings)
- producer running in its own JVM
- Kafka broker: the embedded Kafka (KafkaServerStartable)
It looks like the consumers lose
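For reference, the consumer setup described above (one shared group so the instances rebalance together, a distinct 'client.id' per JVM) can be sketched as plain configuration. This is a JDK-only sketch; the broker address and group name are assumptions, not taken from the original mail:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // Build the configuration for one test consumer: all instances share a
    // single group.id (so they participate in the same rebalance) but carry a
    // distinct client.id, matching the setup described above.
    static Properties consumerConfig(String clientId) {
        Properties p = new Properties();
        p.put("bootstrap.servers", "localhost:9092"); // embedded broker address (assumption)
        p.put("group.id", "test-group");              // shared: members rebalance together
        p.put("client.id", clientId);                 // unique per consumer JVM
        return p;
    }

    public static void main(String[] args) {
        Properties a = consumerConfig("consumer-1");
        Properties b = consumerConfig("consumer-2");
        System.out.println(a.getProperty("group.id").equals(b.getProperty("group.id")));   // true: same group
        System.out.println(a.getProperty("client.id").equals(b.getProperty("client.id"))); // false: distinct ids
    }
}
```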
> Food for thought.
>
> –
> Best regards,
> Radek Gruchalski
> ra...@gruchalski.com
>
>
> On November 28, 2016 at 9:04:16 PM, Bart Vercammen (b...@cloutrix.com)
> wrote:
>
> Hi,
>
> It seems that consumer group rebalance is broken in Kafka 0.10.1.0 ?
> Wh
is a nice work-around, but it still leaves me
with my initial remark that it would be useful
to somehow be able to alter the subscriptions in a running Streams
application.
Bart
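The work-around discussed above is not fully quoted in this snippet; a common approach for quasi-dynamic source topics in Kafka Streams is regex-based subscription (StreamsBuilder#stream accepts a java.util.regex.Pattern), so topics created later that match the pattern can be picked up without redefining the topology. A JDK-only sketch of the matching itself (topic names are made up for illustration):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class TopicPatternSketch {
    // Select source topics by regex, the same idea as subscribing a Streams
    // topology with a Pattern instead of a fixed topic list.
    static List<String> matchingTopics(Pattern p, List<String> allTopics) {
        return allTopics.stream()
                .filter(t -> p.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern sources = Pattern.compile("events-.*");
        List<String> topics = List.of("events-orders", "events-users", "internal-changelog");
        System.out.println(matchingTopics(sources, topics)); // [events-orders, events-users]
    }
}
```

The limitation raised in the thread still applies: the pattern itself is fixed when the topology starts, so changing the pattern (rather than adding topics that match it) requires a restart.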
On Tue, Aug 15, 2017 at 1:45 PM, Bart Vercammen <b...@cloutrix.com> wrote:
> Hi Guozhang,
>
> Thank
r 2) shutdown an existing program piping from a topic. This will
> admittedly introduce a duplicate topic containing the aggregated data, but
> operational-wise may still be simpler.
>
>
> Guozhang
>
>
> On Mon, Aug 14, 2017 at 2:47 AM, Bart Vercammen <b...@cloutrix.com
Hi,
I have a question about the best way to implement
something within Kafka Streams. The thing I would like to do: "dynamically
update the subscription pattern of the source topics."
The reasoning behind this (in my project):
metadata about the source topics is evented
mian
>
> On Tue, 8 Aug 2017 at 12:09 Bart Vercammen <b...@cloutrix.com> wrote:
>
> > That's RocksDB .. I'm using in-memory stores ...
> > here:
> >
> > https://github.com/apache/kafka/blob/0.11.0/streams/src/
> main/java/org/apache/kafka/streams/state/inter
Hi,
I recently moved some KafkaStreams applications from v0.10.2.1 to v1.1.1
and now I notice a weird behaviour in the partition assignment.
When starting 4 instances of my Kafka Streams application (on v1.1.1) I see
that 17 of the 20 partitions (of a source topic) are assigned to 1 instance
of
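For comparison, a balanced assignment of 20 partitions over 4 instances would give 5 partitions each. A round-robin sketch (this is not the actual Streams partition assignor, just an illustration of the expected spread):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RoundRobinSketch {
    // Distribute partition ids over instances round-robin; with 20 partitions
    // and 4 instances each instance should end up owning 5, not 17 as
    // observed in the report above.
    static Map<Integer, List<Integer>> assign(int partitions, int instances) {
        Map<Integer, List<Integer>> out = new HashMap<>();
        for (int i = 0; i < instances; i++) out.put(i, new ArrayList<>());
        for (int p = 0; p < partitions; p++) out.get(p % instances).add(p);
        return out;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> a = assign(20, 4);
        a.forEach((i, ps) ->
                System.out.println("instance " + i + " -> " + ps.size() + " partitions"));
    }
}
```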
urprising. Is there any chance you can zip up some logs
> so we can see the assignment protocol on the nodes?
>
> Thanks,
> -John
>
> On Mon, Oct 8, 2018 at 4:32 AM Bart Vercammen wrote:
>
> > Hi,
> >
> > I recently moved some KafkaStreams applications from
; If the repro doesn't turn out, maybe you could just extract the assignment
> lines from your logs?
>
> Thanks,
> -John
>
> On Mon, Oct 8, 2018 at 1:24 PM Bart Vercammen wrote:
>
> > Hi John,
> >
> > Zipping up some logs from our running Kafka cluster is goin
Hi,
I found a mismatch between the documentation in
the org.apache.kafka.common.serialization.Deserializer and the
implementation in KafkaConsumer.
Deserializer documentation says: "serialized bytes; may be null;
implementations are recommended to handle null by returning a value or null
rather
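A null-tolerant deserialize body along the lines the javadoc recommends might look like this. This is a plain helper method, not an implementation of the actual org.apache.kafka.common.serialization.Deserializer interface:

```java
import java.nio.charset.StandardCharsets;

public class NullSafeDeserializerSketch {
    // Handle a null byte array by returning null (e.g. for a tombstone or
    // absent value) rather than throwing, as the quoted javadoc recommends.
    static String deserialize(byte[] data) {
        if (data == null) {
            return null; // pass null through instead of throwing
        }
        return new String(data, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(deserialize(null));                                     // null
        System.out.println(deserialize("hello".getBytes(StandardCharsets.UTF_8))); // hello
    }
}
```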
a known issue discovered in version 1.1 -
> > > https://issues.apache.org/jira/browse/KAFKA-7144
> > >
> > > This issue has been fixed in Kafka Streams 2.0, any chance you can
> > upgrade
> > > to 2.0?
> > >
> > > Thanks,
> > > Bill
>
Thanks,
> Bill
>
> On Mon, Oct 8, 2018 at 2:46 PM Bart Vercammen wrote:
>
> > Thanks John,
> >
> > I'll see what I can do regarding the logs ...
> > As a side note, our Kafka cluster is running version v1.1.1 in v0.10.2.1
> log
> > format configuration (due to an