Ismael
>
> On Wed, Nov 22, 2017 at 10:13 AM, Anish Mashankar <
> an...@systeminsights.com>
> wrote:
>
> > Thanks Ismael.
> > Just need a clarification on something, because I observed the v0.9 and
> > v0.10 consumers returning errors for an invalid message format.
efficiency hit if the message format used for the topic is newer than the
> message format supported by the consumer.
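For context, the down-conversion cost described above can be avoided by pinning the on-disk message format to the oldest consumer version in use. A minimal broker-config sketch; the version value is illustrative and should match your oldest client:

```properties
# server.properties (broker) sketch: keep the stored message format at a
# version old consumers understand, so the broker need not down-convert
# on every fetch. "0.10.0" is illustrative.
log.message.format.version=0.10.0
```

The same knob exists per topic as the `message.format.version` topic config, which overrides the broker default.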
>
> Also, we found a memory leak in 1.0.0, so I'd recommend you upgrade to
> 0.11.0.2 or wait until 1.0.1.
>
> Ismael
>
> On Tue, Nov 21, 2017 at 8:54 AM,
Hello Kafka users!
The first question that I have is related to the documentation. I see that
we no longer have to change the message format version when upgrading to
1.0. So, will all clients continue to work after performing the rolling
upgrade?
We are running Kafka v0.10.0.0. The Kafka
Hi Guozhang,
Thanks for the reply.
By "taking a lot of time" I meant that I see a log message `Restoring state
from changelog topics`, followed by just some Kafka consumer logs like
`Discovered coordinator`. Looking at this, I assumed that the stream
threads are waiting for the states to be
First question: We know that Kafka Streams commits offsets at intervals.
But which offsets are committed? Are the committed offsets those of messages
that have just arrived at the source node, or of messages that have been
through the entire pipeline? If the latter, how do we avoid data
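As an aside on the interval itself: it is configurable through the plain string key `commit.interval.ms` (with the kafka-streams dependency you would use the `StreamsConfig.COMMIT_INTERVAL_MS_CONFIG` constant). A stdlib-only sketch; the application id and broker address are hypothetical:

```java
import java.util.Properties;

public class CommitIntervalSketch {
    // Builds a hypothetical Streams configuration. "commit.interval.ms"
    // (default 30000 ms) controls how often Streams commits the offsets
    // of records it has processed.
    static Properties streamsProps() {
        Properties props = new Properties();
        props.put("application.id", "streamsApp");        // hypothetical id
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("commit.interval.ms", "1000");          // commit once a second
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps().getProperty("commit.interval.ms"));
    }
}
```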
e stores and initialize Kafka Streams and
> the order of doing things. Could you please double check if it matches your
> code?
>
> Thanks
> Eno
>
>
>> On Aug 5, 2017, at 3:22 AM, Anish Mashankar <an...@systeminsights.com> wrote:
>>
>> Hello Eno,
>>
ed-to-rebalance-error-in-kafka-streams-with-more-than-one-topic-partition
> <https://stackoverflow.com/questions/42329387/failed-to-rebalance-error-in-kafka-streams-with-more-than-one-topic-partition>
>
> Thanks
> Eno
> > On Aug 4, 2017, at 12:48 PM, Anish Mashanka
On Fri, Aug 4, 2017 at 2:28 PM Eno Thereska <eno.there...@gmail.com> wrote:
> Hi Anish,
>
> Could you give more info on how you create the state stores in your code?
> Also could you copy-paste the exact error message from the log?
>
> Thanks
> Eno
I have a new application, call it streamsApp with state stores S1 and S2.
So, according to the documentation, upon the first time startup, the
application should've created the changelog topics streamsApp-S1-changelog
and streamsApp-S2-changelog. But I see that these topics are not created.
Also,
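For what it's worth, the expected changelog topics follow the naming convention `<application.id>-<store name>-changelog` and can be checked from the CLI. A sketch, reusing the `streamsApp`/`S1`/`S2` names from the message above; the commented command needs a running cluster, so the filter is demonstrated on a sample listing:

```shell
# Real check (requires a running cluster / correct ZooKeeper address):
# /opt/kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list
# Demonstration of the filter on a sample topic listing:
printf '%s\n' \
  'streamsApp-S1-changelog' \
  'streamsApp-S2-changelog' \
  'some-input-topic' \
| grep -- '-changelog$'
```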
es (not sure if this is the case for you).
>
> -Matthias
>
> On 7/26/17 7:38 AM, Anish Mashankar wrote:
> > Hello All,
> > I have more than 100 topics in Kafka with one partition each. These 100
> > topics are configured through a regex. When running the application, I
>
Hello All,
I have more than 100 topics in Kafka with one partition each. These 100
topics are configured through a regex. When running the application, I
found that there is only one task that is being spawned as the default
partition grouper in Kafka spawns as many tasks as the maximum number of
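The grouping rule described above can be sketched as follows. This is a simplified model of the default behavior (partitions with the same number across the matched topics fall into one task, so the task count is the maximum partition count), not the actual `DefaultPartitionGrouper` code:

```java
import java.util.HashMap;
import java.util.Map;

public class TaskCountSketch {
    // Simplified model: task count for a subtopology = maximum partition
    // count over its input topics, because same-numbered partitions of
    // different topics are grouped into one task.
    static int taskCount(Map<String, Integer> partitionsPerTopic) {
        return partitionsPerTopic.values().stream()
                .mapToInt(Integer::intValue)
                .max()
                .orElse(0);
    }

    public static void main(String[] args) {
        Map<String, Integer> topics = new HashMap<>();
        for (int i = 0; i < 100; i++) {
            topics.put("topic-" + i, 1); // 100 topics, 1 partition each
        }
        // All 100 single-partition topics collapse into a single task.
        System.out.println(taskCount(topics));
    }
}
```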
After upgrading Kafka from 0.10.0 to 0.11.0 and changing the message format
version to 0.11 on the brokers, consumers on version 0.8.2.1 started
reporting "Invalid message" error logs.
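A common mitigation, assuming the 0.8.2.1 consumers cannot be upgraded immediately, is to pin the format back to one they understand at the topic level. A config sketch; the version value is illustrative:

```properties
# Topic-level override (applied with kafka-configs.sh --alter on the topic):
# store messages in the old on-disk format so 0.8.2.1 consumers can still
# read them without hitting format errors.
message.format.version=0.8.2
```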
On Tue, Jul 18, 2017 at 6:37 PM Ismael Juma wrote:
> Hi all,
>
> 0.8.x clients should work with
Hello everyone,
We are running a 5-broker Kafka v0.10.0.0 cluster on AWS, and the Connect
API is also on v0.10.0.0.
We observed that the distributed Kafka connector went into an infinite
loop, repeatedly logging
(Re-)joining group connect-connect-elasticsearch-indexer.
And after a little more
Try Presto https://prestodb.io. It may solve your problem.
On Sat, 4 Mar 2017, 03:18 Milind Vaidya, wrote:
> I have a 6-broker Kafka setup.
>
> I have a retention period of 48 hrs.
>
> To check whether certain data has reached Kafka or not, I am using the
> command-line consumer to then
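For the debugging step described above, the stock console consumer can bound how much it reads. A CLI sketch; the broker address and topic name are assumptions, and exact flags vary slightly across 0.10.x releases:

```shell
# Read a handful of records from the start of the topic to confirm that
# data actually arrived (requires a running cluster).
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic my-topic \
  --from-beginning \
  --max-messages 10
```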
Yes, kafka.admin helps. You can create an application that resembles
ConsumerGroupCommand.scala to fetch consumer offsets for both the old and
new consumers.
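Before writing custom code, the stock tool already exposes this information. A CLI sketch; the group name and broker address are assumptions, and older ZooKeeper-based groups need `--zookeeper` instead of `--bootstrap-server`:

```shell
# Describe a consumer group's current offsets and lag per partition
# (requires a running cluster).
/opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe \
  --group my-group
```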
On Thu, 29 Sep 2016, 8:17 p.m. Gourab Chowdhury,
wrote:
> Thanks for your suggestion, I had previously read about
I recently ran partition reassignment on some topics. This moved the
replicas of some partitions around the cluster, and it was seamless.
However, when it came to purging old logs according to the topic's
retention.ms property, the replica partitions were not cleaned up. The
leader partition, however,
I am trying to create a centralized application in Java to track consumer
offsets. I followed the guide at
https://cwiki.apache.org/confluence/display/KAFKA/Committing+and+fetching+consumer+offsets+in+Kafka
and was able to get the correct current offset for a consumer group.
However, most of the
Two out of three of our Kafka nodes have become unrecoverable due to disk
corruption. I launched two new nodes, but they got new broker ids.
To redistribute the topics across the cluster, I ran this command:
---
/opt/kafka/bin/kafka-reassign-partitions.sh --broker-list
"1003,1005,1006,1007"
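For reference, the tool's usual flow in this era of Kafka is a three-step generate/execute/verify cycle. A sketch; the JSON file names are placeholders, and the ZooKeeper address is an assumption:

```shell
# 1) Generate a candidate reassignment plan for the new broker set.
/opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics.json \
  --broker-list "1003,1005,1006,1007" --generate
# 2) Execute the saved plan.
/opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file plan.json --execute
# 3) Check whether the reassignment has completed.
/opt/kafka/bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file plan.json --verify
```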
I am running a distributed connector with a custom MetricReporter class.
The metric reporter is able to listen to Kafka metrics and logs them to
the console. However, the values for all metrics being reported are either
zero or infinity. The values do not change for a significant