You should configure a ZooKeeper quorum for ZooKeeper high availability:
you run multiple ZooKeeper services on multiple nodes, and when one goes
down another takes over.
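To make the quorum sizing concrete, here is a small sketch (plain Python, no ZooKeeper libraries involved) of the majority rule an ensemble follows: n servers need floor(n/2)+1 of them up, so the ensemble tolerates n minus that many failures. This is the arithmetic behind the usual advice to run odd-sized ensembles (3, 5, ...).

```python
# Majority-quorum arithmetic for a ZooKeeper ensemble.
# An ensemble of n servers stays available only while a strict
# majority (floor(n/2) + 1) of them is reachable.

def quorum_size(n: int) -> int:
    """Minimum number of servers that must be up to keep quorum."""
    return n // 2 + 1

def tolerated_failures(n: int) -> int:
    """How many servers can fail before the ensemble loses quorum."""
    return n - quorum_size(n)

for n in (1, 3, 4, 5):
    print(f"ensemble={n}: quorum={quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

Note that a 4-node ensemble tolerates only 1 failure, the same as a 3-node one, which is why even sizes buy you nothing.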
Thanks,
Prahalad
On Fri, Jan 13, 2017 at 9:56 AM, Laxmi Narayan NIT DGP wrote:
> Hi,
Hi,
I have understood that ZooKeeper is responsible for running Kafka in cluster
mode.
But how do we ensure that zookeeper never goes down?
*Regards,*
*Laxmi Narayan Patel*
*MCA NIT Durgapur (2011-2014)*
*Mob:-9741292048,8345847473*
Hi Stephen
Out of curiosity, why did you pick ZFS over XFS or ext4 and what options
are you using when formatting and mounting?
Regards,
Stephane
On 13 January 2017 at 6:40:18 am, Stephen Powis (spo...@salesforce.com)
wrote:
Running Centos 6.7 3.10.95-1.el6.elrepo.x86_64. 4 SATA disks in
Can anyone shed some light on this?
On Wed, Jan 11, 2017 at 2:59 PM, Check Peck wrote:
> I am trying to run kafka performance script on my linux box. Whenever I
> run "kafka-consumer-perf-test.sh", I always get an error. In the same box,
> I am running
Hi,
Long technical story ahead, sorry for that.
I'm dealing with a special case. My input topic receives records containing
an id in the key (and another field for partitioning), and a version number
in the value, amongst other metrics. Records with the same id are sent
every 5 seconds, and the
Thanks Eno !
My intention is to reprocess all the data from the beginning. And we'll
reset the application as documented in the Confluent blog.
We don't want to keep the previous results; in fact, we want to overwrite
them. Kafka Connect will happily replace all records in our sink database.
So
-Original Message-
From: Gerard Klijs [mailto:gerard.kl...@dizzit.com]
Sent: Wednesday, May 11, 2016 3:00 AM
To: users@kafka.apache.org
Subject: Re: Backing up Kafka data and using it later?
You could create a docker image with a kafka installation, and start a mirror
maker in it, you
Running CentOS 6.7, kernel 3.10.95-1.el6.elrepo.x86_64. 4 SATA disks in RAID10
with ZFS
On Thu, Jan 12, 2017 at 2:27 PM, Tauzell, Dave wrote:
> You have a local filesystem? Linux?
>
> -Dave
>
> -Original Message-
> From: Stephen Powis
Hello, we ran into a memory issue on a Kafka 0.10.0.1 broker we are running
that required a system restart. Since bringing Kafka back up it seems the
consumers are having issues finding their coordinators. Here are some errors
we’ve seen in our server logs after restarting:
[2017-01-12
You have a local filesystem? Linux?
-Dave
-Original Message-
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 1:22 PM
To: users@kafka.apache.org
Subject: Re: Taking a long time to roll a new log segment (~1 min)
I've further narrowed it down to this
I've further narrowed it down to this particular line:
https://github.com/apache/kafka/blob/0.10.0/core/src/main/scala/kafka/log/OffsetIndex.scala#L294
But I'm still at a loss as to why this would be slow sometimes and not others.
On Thu, Jan 12, 2017 at 10:56 AM, Stephen Powis
You can set the retention for the topic to a small time and then wait for Kafka
to delete the messages before setting it back:
bin/kafka-topics.sh --zookeeper zk.prod.yoursite.com --alter --topic TOPIC_NAME
--config retention.ms=1000
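For anyone wondering why this trick works: the broker's time-based retention makes records older than retention.ms eligible for deletion, so a tiny retention.ms briefly marks everything as expired. A rough sketch of that eligibility check (plain Python, simplified to individual records; the real broker deletes whole log segments, and only when the retention check runs, so this is illustrative rather than the broker's code):

```python
import time

def purge_older_than(records, retention_ms, now_ms=None):
    """Keep only records newer than now - retention_ms.

    records: list of (timestamp_ms, payload) tuples, oldest first.
    """
    if now_ms is None:
        now_ms = int(time.time() * 1000)
    cutoff = now_ms - retention_ms
    return [(ts, payload) for ts, payload in records if ts >= cutoff]

# With retention.ms=1000, anything older than one second becomes
# eligible for deletion on the next retention check.
now = 1_000_000
log = [(now - 5_000, "old"), (now - 500, "recent")]
print(purge_older_than(log, retention_ms=1000, now_ms=now))
# → [(999500, 'recent')]
```

Remember to set retention.ms back to its original value afterwards, and don't expect the deletion to be instantaneous, since it happens at segment boundaries on the cleaner's schedule.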
-Original Message-
From: Laxmi Narayan NIT DGP
-Original Message-
From: Raj Tanneru [mailto:rtann...@fanatics.com]
Sent: Saturday, May 7, 2016 6:46 PM
Hi,
If my topic is not enabled for deletion, is there any other way to purge
messages from the topic?
*Regards,*
*Laxmi Narayan Patel*
*MCA NIT Durgapur (2011-2014)*
*Mob:-9741292048,8345847473*
On Fri, Jan 13, 2017 at 12:16 AM, Kaufman Ng wrote:
> Your zookeeper url doesn't
Your ZooKeeper URL doesn't look right. Port 9092 is the Kafka broker's default
listening port; ZooKeeper's default is 2181.
On Thu, Jan 12, 2017 at 1:33 PM, Laxmi Narayan NIT DGP wrote:
> /bin/kafka-topics.sh --zookeeper localhost:9092 --delete --topic topicName
>
>
> I am
/bin/kafka-topics.sh --zookeeper localhost:9092 --delete --topic topicName
I am getting an exception saying:
[2017-01-13 00:01:45,101] WARN Client session timed out, have not heard
from server in 15016ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)
[2017-01-13 00:02:00,902] WARN Client
Hey Grant - congrats!
On Thu, Jan 12, 2017 at 10:00 AM, Neha Narkhede wrote:
> Congratulations, Grant. Well deserved!
>
> On Thu, Jan 12, 2017 at 7:51 AM Grant Henke wrote:
>
> > Thanks everyone!
> >
> > On Thu, Jan 12, 2017 at 2:58 AM, Damian Guy
-Original Message-
From: Kuldeep Kamboj [mailto:kuldeep.kam...@osscube.com]
Sent: Monday, May 2, 2016 11:29 PM
To: users@kafka.apache.org
Subject: Getting Timed out reading socket error for kafka cluster setup
Hi,
I want to set up a Kafka cluster-type setup for three similar application
Congratulations, Grant. Well deserved!
On Thu, Jan 12, 2017 at 7:51 AM Grant Henke wrote:
> Thanks everyone!
>
> On Thu, Jan 12, 2017 at 2:58 AM, Damian Guy wrote:
>
> > Congratulations!
> >
> > On Thu, 12 Jan 2017 at 03:35 Jun Rao
Hi Nicolas,
I've seen your previous message thread too. I think your best bet for now is to
increase the window duration time to 6 months.
If you change your application logic, e.g., by changing the duration time, the
semantics of the change wouldn't immediately be clear and it's worth
Using the little bash script in that JIRA ticket to go through the GC log and
sum up the total pause times, I come up with the following. I don't see
anything that would indicate a ~28 second pause.
2017-01-12T07:00 = 72.2961
> 2017-01-12T07:01 = 59.8112
> 2017-01-12T07:02 = 69.6915
>
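For reference, the per-minute summing can be sketched like this (plain Python, not the exact script from the JIRA ticket; it assumes HotSpot-style safepoint lines of the form "Total time for which application threads were stopped: N seconds", which is an assumption about the GC log format):

```python
import re
from collections import defaultdict

# Matches a timestamped safepoint line and captures the minute and the
# pause duration in seconds.
PAUSE_RE = re.compile(
    r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}).*?"
    r"Total time for which application threads were stopped: ([\d.]+) seconds"
)

def sum_pauses_per_minute(lines):
    """Sum JVM pause seconds from GC log lines, keyed by minute."""
    totals = defaultdict(float)
    for line in lines:
        m = PAUSE_RE.match(line)
        if m:
            totals[m.group(1)] += float(m.group(2))
    return dict(totals)

gc_log = [
    "2017-01-12T07:00:01.123: Total time for which application threads were stopped: 0.0300 seconds",
    "2017-01-12T07:00:31.456: Total time for which application threads were stopped: 0.0250 seconds",
    "2017-01-12T07:01:02.789: Total time for which application threads were stopped: 0.0100 seconds",
]
print(sum_pauses_per_minute(gc_log))
```

A minute whose summed pauses approach the observed roll time would implicate GC; totals in the tens of milliseconds, as reported here, would not.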
Thanks everyone!
On Thu, Jan 12, 2017 at 2:58 AM, Damian Guy wrote:
> Congratulations!
>
> On Thu, 12 Jan 2017 at 03:35 Jun Rao wrote:
>
> > Grant,
> >
> > Thanks for all your contribution! Congratulations!
> >
> > Jun
> >
> > On Wed, Jan 11, 2017 at
Just realized that GCEasy doesn't keep reports around for very long
anymore; here is a screencap of the report: http://imgur.com/a/MEubD
The longest reported GC pause was 30ms, though they happen somewhat frequently,
at an average of once per 12 seconds. KAFKA-4616 certainly sounds just
like my
You may be running into this bug: https://issues.apache.org/jira/browse/KAFKA-4614
On Thu, 12 Jan 2017 at 23:38 Stephen Powis wrote:
> Per my email to the list in Sept, when I reviewed GC logs then, I didn't
> see anything out of the ordinary. (
>
>
Thanks, works well.
For anyone searching for this, here's an example:
props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
"org.apache.kafka.clients.consumer.RoundRobinAssignor");
> On 12 Jan 2017, at 11:37 PM, tao xiao wrote:
>
> The default partition assignor is
The default partition assignor is the range assignor, which assigns work on a
per-topic basis. If your topics have only one partition, they will all be
assigned to the same consumer. You can change the assignor to
org.apache.kafka.clients.consumer.RoundRobinAssignor
On Thu, 12 Jan 2017 at 22:33 Tobias
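A small sketch of why the two assignors behave so differently for single-partition topics (plain Python, a simplification of the real assignment logic, not Kafka's code): range assignment is computed topic by topic, and within each topic the first partitions always go to the first consumer in sorted order, so many one-partition topics pile onto one consumer, while round-robin spreads topic-partitions across the whole group.

```python
from itertools import cycle

def range_assign(consumers, partitions_per_topic):
    """Per-topic range assignment: each topic's partitions are split into
    contiguous ranges, with earlier consumers taking the extras, so a
    one-partition topic always lands on the first consumer."""
    consumers = sorted(consumers)
    assignment = {c: [] for c in consumers}
    for topic in sorted(partitions_per_topic):
        n = partitions_per_topic[topic]
        per, extra = divmod(n, len(consumers))
        p = 0
        for i, c in enumerate(consumers):
            for _ in range(per + (1 if i < extra else 0)):
                assignment[c].append((topic, p))
                p += 1
    return assignment

def roundrobin_assign(consumers, partitions_per_topic):
    """Round-robin assignment: all topic-partitions are dealt out across
    consumers in turn, regardless of topic boundaries."""
    consumers = sorted(consumers)
    assignment = {c: [] for c in consumers}
    rotation = cycle(consumers)
    for topic in sorted(partitions_per_topic):
        for p in range(partitions_per_topic[topic]):
            assignment[next(rotation)].append((topic, p))
    return assignment

topics = {f"orders-{i}": 1 for i in range(4)}  # four single-partition topics
print(range_assign(["c1", "c2"], topics))       # c1 gets all four
print(roundrobin_assign(["c1", "c2"], topics))  # two each
```

With range assignment, c1 ends up with every partition and c2 with none, which matches the behavior described above.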
Per my email to the list in Sept, when I reviewed GC logs then, I didn't
see anything out of the ordinary. (
http://mail-archives.apache.org/mod_mbox/kafka-users/201609.mbox/%3CCABQB-gS7h4Nuq3TKgHoAVeRHPWnBNs2B0Tz0kCjmdB9c0SDcLQ%40mail.gmail.com%3E
)
Reviewing the GC logs from this morning around
Can you collect garbage collection stats and verify there isn't a long GC
happening at the same time?
-Dave
-Original Message-
From: Stephen Powis [mailto:spo...@salesforce.com]
Sent: Thursday, January 12, 2017 8:34 AM
To: users@kafka.apache.org
Subject: Re: Taking a long time to roll a
So per the Kafka docs I upped our FD limit to 100k, and we are no longer
seeing the process die, which is good.
Unfortunately we're still seeing very high log segment roll times, and I'm
unsure if this is considered 'normal', as it tends to block producers
during this period.
We are running kafka
Hi
We have a scenario where we have a lot of single-partition topics for ordering
purposes.
We then want to use multiple consumer processes listening to many topics.
During testing it seems like one consumer process will always end up with all
topics/partitions assigned to it and there is no
Thanks for your response Damian.
> However the in-memory store will write each update to the changelog
> (regardless of context.commit), which seems to be the issue you have?
Yes, I have that issue.
Although I can't give a specific number, it is an issue for me, for example,
that Kafka Streams reads
Producers were publishing data for the topic. And consumers were also
connected, sending heartbeat pings every 100 ms.
On Thu, 12 Jan 2017 at 17:15 Michael Freeman wrote:
> If the topic has not seen traffic for a while then Kafka will remove the
> stored offset. When
If the topic has not seen traffic for a while, Kafka will remove the stored
offset. When your consumer reconnects, Kafka no longer has the offset, so it
will reprocess from earliest.
Michael
> On 12 Jan 2017, at 11:13, Mahendra Kariya wrote:
>
> Hey All,
>
> We
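A sketch of the mechanics Michael describes (plain Python, simplified; the actual behavior is governed by the broker's offsets.retention setting and the consumer's auto.offset.reset config, and the parameter names below are illustrative, not a real client API):

```python
def resume_position(committed_offset, offset_expired, auto_offset_reset,
                    earliest, latest):
    """Where a consumer group resumes after reconnecting.

    If the committed offset has been expired by the broker (its offset
    retention window elapsed with no new commits), the consumer falls
    back to its auto.offset.reset policy.
    """
    if committed_offset is not None and not offset_expired:
        return committed_offset
    if auto_offset_reset == "earliest":
        return earliest   # reprocess from the beginning -> lag jumps to the whole log
    if auto_offset_reset == "latest":
        return latest     # skip ahead -> silently drops unread messages
    raise ValueError("no valid offset and auto.offset.reset=none")

# Offset expired while the topic was idle: with 'earliest' the group
# jumps back to offset 0, and the reported lag suddenly spans millions
# of records.
print(resume_position(5_000_000, True, "earliest", earliest=0, latest=5_000_100))
# → 0
```

This is consistent with the sudden multi-million lag described above: nothing was "lost", the group simply restarted from the earliest retained offset.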
Hey All,
We have a Kafka cluster hosted on Google Cloud. There was some network
issue on the cloud and suddenly the offset for a particular consumer group
got reset to earliest, and the lag was in the millions. We
aren't able to figure out what went wrong. Has anybody faced the
Hi.
I'd like to re-consume 6 months old data with Kafka Streams.
My current topology can't because it defines aggregations with window maintain
durations of 3 days.
TimeWindows.of(ONE_HOUR_MILLIS).until(THREE_DAYS_MILLIS)
As discovered (and shared [1]) a few months ago, consuming a record
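A sketch of the retention check behind until() (plain Python, a simplification of Kafka Streams' segmented window store, not its actual code): a record only lands in a window if that window is still within the maintain duration of the observed stream time, so with until(THREE_DAYS_MILLIS) any record more than ~3 days behind stream time is silently dropped. That is why re-consuming 6 months of history needs the maintain duration raised to cover the full span.

```python
ONE_HOUR_MS = 60 * 60 * 1000
ONE_DAY_MS = 24 * ONE_HOUR_MS

def window_accepts(record_ts, stream_time, window_size_ms, maintain_ms):
    """True if the record's window is still retained.

    The record falls into the window starting at
    record_ts - record_ts % window_size_ms; that window is kept only
    while it is within maintain_ms of the max observed stream time.
    """
    window_start = record_ts - record_ts % window_size_ms
    return window_start > stream_time - maintain_ms

stream_time = 200 * ONE_DAY_MS   # the app has already seen recent data

# A record from ~6 months back with the original 3-day retention: dropped.
old_record = 20 * ONE_DAY_MS
print(window_accepts(old_record, stream_time, ONE_HOUR_MS, 3 * ONE_DAY_MS))
# → False

# Same record with retention raised to ~6 months (183 days): kept.
print(window_accepts(old_record, stream_time, ONE_HOUR_MS, 183 * ONE_DAY_MS))
# → True
```

The subtlety is that stream time advances as soon as any recent record is seen, so mixing old and new data during a reprocess can expire the old windows immediately.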
Congratulations!
On Thu, 12 Jan 2017 at 03:35 Jun Rao wrote:
> Grant,
>
> Thanks for all your contribution! Congratulations!
>
> Jun
>
> On Wed, Jan 11, 2017 at 2:51 PM, Gwen Shapira wrote:
>
> > The PMC for Apache Kafka has invited Grant Henke to join as