The second edition is not complete yet. The chapters that have been
released as part of the early release are updated, and you can use those
instead of the chapters in the first edition.
So use both for now :)
-Todd
On Mon, Apr 12, 2021 at 10:24 AM SuarezMiguelC
wrote:
> Hello Apache Ka
As a note, that part of the second edition has not been updated yet. This
setting used to cause significant problems, but more recent updates to the
controller code have made the auto leader rebalancing usable.
-Todd
On Mon, Apr 12, 2021 at 10:20 AM Liam Clarke-Hutchinson <
liam.
probably don't need that.
-Todd
On Sat, Oct 26, 2019, 1:19 PM Edward Capriolo wrote:
> On Saturday, October 26, 2019, M. Manna wrote:
>
> > You should also check out Becket Qin’s presentation on producer
> performance
> > tuning on YouTube. Both these items should
short period of
time when you expand.
-Todd
On Tue, Jan 8, 2019 at 11:11 AM aruna ramachandran
wrote:
> I need to process single sensor messages in serial (order of messages
> should not be changed) at the same time I have to process 1 sensors
> messages in parallel please h
I think you’ll need to expand a little more here and explain what you mean
by processing them in parallel. Nearly by definition, parallelization and
strict ordering are mutually exclusive concepts.
-Todd
On Tue, Jan 8, 2019 at 10:40 AM aruna ramachandran
wrote:
> I need to process the 10
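A common way to reconcile the two goals in the thread above is to key messages by sensor ID: processing is then parallel across sensors but strictly ordered within each sensor, because all messages for one key land in the same partition. A pure-Python sketch of the idea (the partition count and the byte-sum hash are illustrative stand-ins, not Kafka's actual murmur2 partitioner):

```python
# Toy model: keyed messages are routed to partitions, so per-key order is
# preserved even though partitions are consumed in parallel.
NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # Kafka's default partitioner hashes the key bytes (murmur2);
    # a deterministic byte sum stands in for it here.
    return sum(key.encode()) % NUM_PARTITIONS

messages = [("sensor-1", 1), ("sensor-2", 1), ("sensor-1", 2), ("sensor-1", 3)]
partitions = {p: [] for p in range(NUM_PARTITIONS)}
for key, seq in messages:
    partitions[partition_for(key)].append((key, seq))

# All of sensor-1's messages share one partition, in their original order:
p = partition_for("sensor-1")
print([seq for key, seq in partitions[p] if key == "sensor-1"])
# [1, 2, 3]
```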
From what I've read, a Ktable directly sourced from a compacted topic is smart
enough to not use a change log in the background. I must be doing something
wrong though as I have a setup similar to below and I can see on the broker a
topic named something like myappid-myStore-changelog is actual
there have been a number of
changes and updates that are useful. In the meantime, you may find more
help if you post to our Gitter:
https://gitter.im/linkedin-Burrow/Lobby
-Todd
On Thu, Mar 1, 2018 at 11:58 PM, Srinivasa Balaji <
srinivasa_bal...@trimble.com> wrote:
> We are runni
Not recommended. You’ll have timeout issues with the size of the controller
requests. Additionally, there appear to be problems with writing some nodes
in Zookeeper at high partition counts.
-Todd
On Thu, Dec 14, 2017 at 8:58 AM, Abhimanyu Nagrath <
abhimanyunagr...@gmail.com> wrote:
the Github issues, or PRs for contributing!
-Todd
--
*Todd Palino*
Senior Staff Engineer, Site Reliability
Data Infrastructure Streaming
linkedin.com/in/toddpalino
You can do this using the kafka-reassign-partitions tool (or using a 3rd
party tool like kafka-assigner in github.com/linkedin/kafka-tools) to
explicitly assign the partitions to an extra replica, or remove a replica.
-Todd
On Tue, Sep 19, 2017 at 3:45 PM, Devendar Rao
wrote:
> Is it possi
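For reference, the kafka-reassign-partitions approach takes a JSON file naming the desired replica list per partition; adding or dropping a broker ID in that list adds or removes a replica. A sketch with illustrative broker IDs, topic name, and ZooKeeper address:

```shell
# Illustrative only: make brokers 1001 and 1002 the replicas for
# partition 0 of "my-topic" (adding 1002 as an extra replica).
cat > change-replicas.json <<'EOF'
{"version":1,"partitions":[
  {"topic":"my-topic","partition":0,"replicas":[1001,1002]}
]}
EOF
kafka-reassign-partitions.sh --zookeeper zk.example.com:2181 \
  --reassignment-json-file change-replicas.json --execute
```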
run with one in flight batch in order to maintain message ordering.
-Todd
On Thu, Sep 14, 2017 at 9:46 PM, Vu Nguyen wrote:
> Many of the descriptions and diagrams online describe deploying Kafka
> MirrorMaker into the target data center (near the target Kafka cluster).
> Since Mirro
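The one-in-flight constraint mentioned above maps to a producer setting; a sketch of the relevant producer config (values illustrative):

```properties
# Strict ordering: retries are allowed, but only one batch may be in
# flight per connection, so a retried batch can't leapfrog a later one.
max.in.flight.requests.per.connection=1
retries=2147483647
acks=all
```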
is your friend.
-Todd
On Mon, Aug 7, 2017 at 11:06 AM, Gabriel Machado
wrote:
> Thanks Todd, i will set swapiness to 1.
>
> Theses machines will be the future production cluster for our main
> datacenter . We have 2 remote datacenters.
> Kafka will bufferize logs and elasticse
ing a heap size of 6 GB right
now. You want to leave memory open for the OS to use for buffers and cache
in order to get better performance from consumers. You can see from that
output that it's trying to.
It really looks like you're just overloading your system. In which case
swapping
We haven’t had any problem after tuning the default send/receive buffers in
the OS up to 10MB. Linux uses a sliding window, so if you have short
latencies, you won’t use as much of the buffer and you should see very
little, if any, impact.
-Todd
On Mon, Jul 24, 2017 at 2:20 PM, James Cheng
called receive.buffer.bytes. Again, you can set this to -1 to use
the OS configuration. Make sure to restart the applications after making
all these changes, of course.
-Todd
On Sat, Jul 22, 2017 at 1:27 AM, James Cheng wrote:
> Becket Qin from LinkedIn spoke at a meetup about how to tune
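The OS side of the buffer tuning discussed above might look like the following sysctl fragment; the 10 MB figure mirrors the thread, but the right values depend on your latency and throughput profile:

```shell
# /etc/sysctl.conf fragment: raise the maximum socket buffer sizes so
# Kafka clients and brokers can grow their buffers up to ~10 MB.
net.core.rmem_max=10485760
net.core.wmem_max=10485760
```

With send.buffer.bytes/receive.buffer.bytes set to -1 on the Kafka side, clients and brokers then defer to these OS limits.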
critical metrics via JMX and pushes them into a separate
system (that doesn’t use Kafka). ELK is used for log analysis for other
applications.
Kafka-monitor is what we built/use for synthetic traffic monitoring for
availability. And Burrow for monitoring consumers.
-Todd
On Tue, Jun 20, 2017 at 9
You can look at enabling JMX on kafka (
https://stackoverflow.com/questions/36708384/enable-jmx-on-kafka-brokers) using
JMXTrans (https://github.com/jmxtrans/jmxtrans) and a config (
https://github.com/wikimedia/puppet-kafka/blob/master/kafka-jmxtrans.json.md)
to gather stats, and insert them into
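A minimal jmxtrans config in the shape the linked examples use might look like this; the MBean name is a real broker metric, but the hostnames, port, and Graphite output writer are assumptions to adapt:

```json
{
  "servers": [{
    "host": "kafka-broker.example.com",
    "port": "9999",
    "queries": [{
      "obj": "kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec",
      "attr": ["Count", "OneMinuteRate"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings": {"host": "graphite.example.com", "port": 2003}
      }]
    }]
  }]
}
```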
provides a number of
different ways to rebalance traffic.
There are other tools available for doing this, but right now it requires
something external to the Apache Kafka core.
-Todd
On Tue, Jun 13, 2017 at 5:30 PM, karan alang wrote:
> Hi All,
>
> Fpr Re-balancing Kafka partitions, w
with for thousands of
topics and tens of thousands of partitions over many consumers, and no good
way to define thresholds.
-Todd
On Mon, May 29, 2017 at 3:51 PM, Ian Duffy wrote:
> Hey Abhimanyu,
>
> Not directly answering your questions but in the past we used burrow at my
> cur
more automated
fashion.
-Todd
On Wed, Apr 26, 2017 at 12:25 PM Naanu Bora wrote:
> Hi,
>In our team some developers created topics with replication factor as 1
> by mistake and number of partition in the range of 20-40. How to increase
> the replication factor to 3 for those to
't seem to find how to modify this behavior in the consumer properties
documentation. Is it possible, and if so, what settings do I tweak?
Thanks,
Todd
that will be.
A better question here is why do you want to move the controller?
-Todd
On Wed, Apr 5, 2017 at 9:09 AM, Jun MA wrote:
> Hi,
>
> We are running kafka 0.9.0.1 and I’m looking for an elegant way to failover
> controller to other brokers. Right now I have to restart t
They are defined at the broker level as a default for all topics that do
not have an override for those configs. Both (and many other configs) can
be overridden for individual topics using the command line tools.
-Todd
On Wed, Mar 8, 2017 at 12:36 PM, Nicolas Motte wrote:
> Hi everyone
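The per-topic overrides described above are set with the kafka-configs tool; a sketch, with an illustrative topic name, ZooKeeper address, and retention value:

```shell
# Override retention for one topic, leaving the broker default in place
# for everything else. The --zookeeper flag matches the era of this
# thread; newer releases use --bootstrap-server instead.
kafka-configs.sh --zookeeper zk.example.com:2181 --alter \
  --entity-type topics --entity-name my-topic \
  --add-config retention.ms=86400000
```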
Nicholas, this appears to be a duplicate of your question from 2 days ago.
Please review that for discussion on this question.
-Todd
On Wed, Mar 8, 2017 at 1:08 PM, Tauzell, Dave
wrote:
> I think because the product batches messages which could be for different
> topics.
>
. But the
migration use case is different than that.
-Todd
On Mon, Mar 6, 2017 at 2:50 PM, Jack Foy wrote:
> Hey, all. Is there any general guidance around using mirrored topics
> in the context of a cluster migration?
>
> We're moving operations from one data center to another,
r.properties and acks=all on
> producer? min.insync.replicas only applies when acks=all.
>
> -James
>
> >
> > -Original Message-
> > From: Todd Palino [mailto:tpal...@gmail.com]
> > Sent: Monday, March 06, 2017 6:48 PM
> > To: users@kafka.apache.org
> > S
exists no
topic override for that configuration for that config.
-Todd
On Mon, Mar 6, 2017 at 4:38 PM, Shrikant Patel wrote:
> Hi All,
>
> Need details about min.insync.replicas in the server.properties.
>
> I thought once I add this to server.properties, all subsequent topic
>
new JRE (bringing with it who knows what
problems). Swapping an underlying OpenSSL version would be much more
palatable.
-Todd
On Mon, Mar 6, 2017 at 9:01 AM, Ismael Juma wrote:
> Even though OpenSSL is much faster than the Java 8 TLS implementation (I
> haven't tested against Java
> > If it is true, I don't get why the message has to be decoded by Kafka. I
> > would assume that whether the message is encrypted or not, Kafka simply
> > receives it, appends it to the file, and when a consumer wants to read
> it,
> > it simply reads at the right offse
about to start testing additional consumers over TLS, so
we’ll see what happens at that point. All I can suggest right now is that
you test in your environment and see what the impact is. Oh, and using
message keys (or not) won’t matter here.
-Todd
On Mon, Mar 6, 2017 at 5:38 AM, Nicolas Motte
should also stop all the mirror maker instances in a given consumer group
in parallel, as this will minimize the number of rebalances and how long it
takes for you to start passing messages again.
-Todd
On Tue, Feb 21, 2017 at 5:14 PM, Qian Zhu wrote:
> Hi,
>For now, I am doing “k
clusters with thousands of
partitions per broker, and tons of network connections from clients, I have
the FD limit set to 400k. Basically, you don’t want to run out, so you want
enough buffer to catch a problem (like a bug with socket file descriptors
not getting released properly).
-Todd
On Tue, Feb
__consumer_offsets is a log-compacted topic, and a NULL body indicates a
delete tombstone. So it means to delete the entry that matches the key
(group, topic, partition tuple).
-Todd
On Wed, Feb 22, 2017 at 3:50 PM, Jun MA wrote:
> Hi guys,
>
> I’m trying to consume from __consume
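The tombstone semantics described above can be sketched in a few lines of pure Python: each record is keyed by a (group, topic, partition) tuple, a later record overwrites an earlier one, and a None value deletes the key. This is an illustration of the compaction semantics, not the broker's actual code:

```python
# Toy model of log compaction on __consumer_offsets: keep only the
# latest value per key; a record with a None value is a tombstone.
def compact(records):
    state = {}
    for key, value in records:
        if value is None:
            state.pop(key, None)   # tombstone: delete the entry for this key
        else:
            state[key] = value     # later records overwrite earlier ones
    return state

records = [
    (("my-group", "clicks", 0), 42),
    (("my-group", "clicks", 1), 17),
    (("my-group", "clicks", 0), 57),    # newer offset commit wins
    (("my-group", "clicks", 1), None),  # tombstone: offset entry removed
]
print(compact(records))
# {('my-group', 'clicks', 0): 57}
```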
spindles). This is either more disks or more brokers, but at the end
of it you need to eliminate the disk IO bottleneck.
-Todd
On Tue, Feb 21, 2017 at 7:29 AM, Jon Yeargers
wrote:
> Running 3x 8core on google compute.
>
> Topic has 16 partitions (replication factor 2) and is consumed by
en I try to reassign with the config...
>
> {"version":1,"partitions":[{"topic":"foo","partition":2,"replicas":[1004,1001]}]}
>
> I see that it doesn't resolve.
>
> Status of partition reassignment:
> Reas
the servers. It’s worked for the last couple without a problem.
-Todd
On Tue, Dec 20, 2016 at 7:55 PM, Sanjeev T wrote:
> Hi,
>
> Can some of you share points on, the versions and handling leap second
> delay on Dec 31, 2016.
>
> Regards
> -Sanjeev
>
--
*Todd Palino
Are you actually getting requests that are 1.3 GB in size, or is something
else happening, like someone trying to make HTTP requests against the Kafka
broker port?
-Todd
On Mon, Dec 12, 2016 at 4:19 AM, Ramya Ramamurthy <
ramyaramamur...@teledna.com> wrote:
> We have got exactly
monitoring applications.
-Todd
On Mon, Dec 12, 2016 at 8:47 AM, Tim Visher wrote:
> I wonder if datadog monitoring triggers that behavior. That's the only
> other piece of our infrastructure that may have been talking to that topic.
>
> On Mon, Dec 12, 2016 at 12:40 AM, Suren
recreating a topic that has been deleted as it issues a metadata request to
try and find out what happened after an offset request for the topic fails.
-Todd
On Fri, Dec 9, 2016 at 8:37 AM, Tim Visher wrote:
> On Fri, Dec 9, 2016 at 11:34 AM, Todd Palino wrote:
>
> > Given that
have auto topic creation enabled?
-Todd
On Fri, Dec 9, 2016 at 8:25 AM, Tim Visher wrote:
> I did all of that because setting delete.topic.enable=true wasn't
> effective. We set that across every broker, restarted them, and then
> deleted the topic, and it was still stuck in exi
the special mirror maker that we have that does 1-to-1 mappings between
partitions for clusters that do not have auto topic creation enabled, the
topic creation (or partition count changes) are taken care of in the
message handler.
-Todd
On Mon, Dec 5, 2016 at 12:42 AM, James Cheng wrote:
>
You *could* go into zookeeper and nuke the topic, then delete the files on
disk
Slightly more risky but it should work
On Wednesday, 5 October 2016, Manikumar wrote:
> Kafka doesn't support white spaces in topic names. Only '.', '_'
> and '-' are allowed.
> Not sure how you got
:
kafka-assigner -z zookeeper.example.com:2181 -e remove -b 1
That runs a bunch of partition reassignments to move all replicas off that
broker and distribute them to the other brokers in the cluster.
-Todd
On Thu, Sep 29, 2016 at 3:53 PM, Praveen wrote:
> I have 16 brokers. Now one of
So I can’t speak for general Gmail, but we have been using it through Gmail
internally here for a while. Just watch out for those rate limits, because
Burrow can get noisy (depending on your clusters and consumers)!
-Todd
On Mon, Aug 1, 2016 at 7:30 AM, Brian Dennis wrote:
> Burrow us
acted. The
coordinator needs to bootstrap the topic, and if log compaction is broken
that can take a very long time. During that time, it will return errors to
consumers for offset operations, and that can cause offset resets.
-Todd
On Thursday, July 14, 2016, Anderson Goulart
wrote:
> Hi,
>
is to have Burrow bootstrap the
__consumer_offsets topic from the oldest offsets, which should avoid some
confusion like this. However, there’s a couple things with higher priority
for me personally first.
-Todd
On Fri, Jul 8, 2016 at 9:22 AM, Tom Dearman wrote:
> Sorry, I should say o
the first
partition (at least not after Burrow was started).
-Todd
On Friday, July 8, 2016, Tom Dearman wrote:
> Todd,
>
> Thanks for that I am taking a look.
>
> Is there a bug whereby if you only have a couple of messages on a topic,
> both with the same key, that burrow doesn’t
u can consume plaintext and produce over SSL.
Obviously, encryption via produce still has some performance overhead. But
modern processors have optimized instructions for encryption. And not doing
it over the consume side avoids the hit on the brokers from losing the zero
copy send.
-Todd
On Wedn
t for a while now, and it seems pretty safe. At least
none of our Kafka developers have complained about us doing it :)
-Todd
On Wednesday, July 6, 2016, Kristoffer Sjögren wrote:
> That's awesome! I can see the JMX bean [1] in our current 0.8.2
> brokers and the number seems updated
my colleague Jon Bringhurst
profusely for helping to get the structure around the project and the
documentation cleaned up.
-Todd
--
*Todd Palino*
Staff Site Reliability Engineer
Data Infrastructure Streaming
linkedin.com/in/toddpalino
For more details, you can also check out my blog post on the release:
https://engineering.linkedin.com/apache-kafka/burrow-kafka-consumer-monitoring-reinvented
-Todd
On Wednesday, July 6, 2016, Tom Dearman wrote:
> I recently had a problem on my production which I believe was a
> manif
We do this through our monitoring agents by pulling it as a metric from the
LogEndOffset beans. By putting it into our metrics system we get a mapping
of timestamp to offset for every partition with (currently) 60 second
granularity. Useful for offset resets and other tasks.
-Todd
On Wednesday
Well, if you have a log compacted topic, you can issue a tombstone message
(key with a null message) to delete it. Outside of that, what Tom said
applies.
-Todd
On Tue, Jun 14, 2016 at 9:13 PM, Mudit Kumar wrote:
> Thanks Tom!
>
>
>
>
> On 6/14/16, 8:55 PM, "Tom Cr
topics or anything like that to make sure that your
weighted cluster balance is still where you want it to be, and manually fix
it if not.
-Todd
On Fri, Jun 10, 2016 at 5:26 AM, Alex Loddengaard wrote:
> Hi Kevin,
>
> If you keep the same configs on the new brokers with more storage capa
:-HeapDumpOnOutOfMemoryError -XX:+UseG1GC
-Todd
On Fri, Jun 10, 2016 at 7:09 AM, Dustin Cote wrote:
> Yes, but 16GB is probably not necessary and potentially detrimental.
> Please have a look at the doc here
> <http://kafka.apache.org/documentation.html#java> that shows what Lin
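The truncated options above would typically be set through the environment variables Kafka's start scripts honor; a sketch pairing the 6 GB heap discussed elsewhere in this archive with G1 and a heap dump on OOM:

```shell
# Example overrides read by bin/kafka-server-start.sh (via kafka-run-class.sh).
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:+HeapDumpOnOutOfMemoryError"
```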
a round robin DNS entry?
-Todd
On Sun, Jun 5, 2016 at 6:34 AM, Ewen Cheslack-Postava
wrote:
> Note, however, that a load balancer can be useful for bootstrapping
> purposes, i.e. use it for the bootstrap.servers setting to have a single
> consistent value for the setting but allow t
partitions on the right brokers.
-Todd
On Wed, Jun 1, 2016 at 12:02 PM, Vladimir Picka
wrote:
> Does creating new topic with new name and the same settings as the
> original one and directly copying files from kafka log directory into the
> new topic folder work? It would be nice if it would
There's no way to do that. If you're trying to maintain data, you'll need
to read all the data from the existing topic and produce it to the new one.
-Todd
On Wednesday, June 1, 2016, Johannes Nachtwey <
johannes.nachtweyatw...@gmail.com> wrote:
> Hi guys,
>
Answers are in-line below.
-Todd
On Sun, May 29, 2016 at 3:00 PM, Igor Kravzov
wrote:
> Please help me with the subject.
> In Kafka documentations I found the following:
>
> *Kafka only provides a total order over messages within a partition, not
> between different partitions
look like.
-Todd
On Fri, May 6, 2016 at 4:07 AM, Andrew Backhouse
wrote:
> Hello,
>
> I trust you are well.
>
> There's a wealth of great articles and presentations relating to the Kafka
> logic and design, thank you for these. But, I'm unable to find informati
There will also be inter-broker replication traffic, and controller
communications (the controller runs on an elected broker in the
cluster). If you're using security features in Kafka 0.9, you may see
additional auth traffic between brokers.
That's all I can think of off the top of my head.
O
Rsyslog (8.15+) now supports producing to Kafka, and doesn't require java
(that can be a bonus). Rsyslog can use a disk buffer, then when it can
connect to Kafka, it will start streaming logs until the connection drops.
That's a pretty simple config, and there are lots of examples online.
T
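An rsyslog (8.15+) action producing to Kafka via omkafka, with a disk-assisted queue so logs buffer locally when the brokers are unreachable, might look roughly like this; parameter names follow the omkafka and queue docs, and all values are illustrative:

```
module(load="omkafka")
action(type="omkafka"
       broker=["kafka1.example.com:9092"]
       topic="syslog"
       queue.type="LinkedList"
       queue.filename="kafka_fwd"      # enables disk-assisted queueing
       queue.saveOnShutdown="on"
       action.resumeRetryCount="-1")   # retry forever on broker outage
```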
”. If you are performing a rolling bounce, this can
conflict seriously with our shutdown check which assures that the cluster
under replicated count is zero before performing a shutdown.
-Todd
On Tue, Mar 29, 2016 at 1:29 PM, James Cheng wrote:
>
> > On Mar 29, 2016, at 10:33 AM, Todd Pali
criteria. I’m hoping to have more to say about this
later this week.
-Todd
On Tue, Mar 29, 2016 at 7:27 AM, Srikanth Chandika
wrote:
> Hi,
>
> I am new to kafka I am testing all the options in kafka.
> I am confused about the re-balancing?
> How and where to configure the re-b
cting various
sources. Then maybe try setting up the confluent stack, and looking at a more
end-to-end solution.
There is a whole ecosystem being built around kafka to suit all the varied
interests and needs - find the ones that suit yours and start poking at it.
Cheers
Todd.
Sent fr
drop it down, you can then look at increasing the number of
partitions and increasing the number of logstash consumers further. While
you may get some benefit from increasing partitions without increasing the
consumer count, you’ll most likely have to do both.
-Todd
On Mon, Mar 7, 2016 at 8:46 AM
to make this a lot easier, and I’m in the process of getting
a repository set up to make it available via open source. It allows for
more easily removing and adding brokers, and rebalancing partitions in a
cluster without having to craft the reassignments by hand.
-Todd
On Fri, Mar 4, 2016 at 5
So long as you put some basic monitoring in place, it should run nicely with
very little intervention and let you be confident everything is as it should be.
Key things to watch:
* disk space - a disk filling up really makes things difficult for you. Make
sure your retention fits your footprint
jens.ran...@tink.se
> Phone: +46 708 84 18 32
> Web: www.tink.se
>
--
*—-*
*Todd Palino*
Staff Site Reliability Engineer
Data Infrastructure Streaming
linkedin.com/in/toddpalino
the load.
-Todd
On Thu, Jan 14, 2016 at 9:25 AM, Gwen Shapira wrote:
> It depends on load :)
> As long as there is no contention, you are fine.
>
> On Thu, Jan 14, 2016 at 6:06 AM, Erik Forsberg wrote:
>
> > Hi!
> >
> > Pondering how to configure Kafka c
tool for deleting a consumer group.
-Todd
On Sat, Dec 19, 2015 at 11:34 AM, Akhilesh Pathodia <
pathodia.akhil...@gmail.com> wrote:
> What is the command to delete a group from zookeeper? I don't find
> /consumer/ directory? I am using cloudera, is there any place on cloudera
>
.
-Todd
On Saturday, December 19, 2015, Akhilesh Pathodia <
pathodia.akhil...@gmail.com> wrote:
> What is the process for deleting the consumer group from zookeeper? Should
> I export offset, delete and then import?
>
> Thanks,
> Akhilesh
>
> On Fri, Dec 18, 2015 at 11:
Yes, that’s right. It’s just work for no real gain :)
-Todd
On Fri, Dec 18, 2015 at 9:38 AM, Marko Bonaći
wrote:
> Hmm, I guess you're right Todd :)
> Just to confirm, you meant that, while you're changing the exported file it
> might happen that one of the segment files b
offsets from the brokers and set them in Zookeeper for the consumer, by the
time you do that the smallest offset is likely no longer valid. This means
you’re going to resort to the offset reset logic anyways.
-Todd
On Fri, Dec 18, 2015 at 7:10 AM, Marko Bonaći
wrote:
> You can also do this:
preference).
-Todd
On Friday, December 18, 2015, Akhilesh Pathodia
wrote:
> Hi,
>
> I want to reset the kafka offset in zookeeper so that the consumer will
> start reading messages from first offset. I am using flume as a consumer to
> kafka. I have set the kafka property kafka.auto.
I want to say that the metrics only show up when the first message comes in,
but I could be thinking of another tool.
Try sending a message to the broker and see if metrics appear?
t.
-Original Message-
From: Wollert, Fabian [mailto:fabian.woll...@zalando.de]
Sent: Thursday, December 1
The quota page is here: http://kafka.apache.org/documentation.html#design_quotas
"By default, each unique client-id receives a fixed quota in bytes/sec as
configured by the cluster (quota.producer.default, quota.consumer.default)"
I also noticed there's been a change in the replication configur
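The defaults quoted from the docs above are set broker-side; a sketch of the server.properties lines (10 MB/sec is an illustrative value):

```properties
# Default per-client.id quotas, in bytes/sec, applied unless a
# per-client override exists.
quota.producer.default=10485760
quota.consumer.default=10485760
```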
moving
partitions from one mount point to another without shutting down the broker
and doing it manually.
-Todd
On Tue, Dec 1, 2015 at 4:31 AM, Guillermo Ortiz
wrote:
> Hello,
>
> I want to size the kafka cluster with just one topic and I'm going to
> process the data with
-workers)
4) Try switching to the service nodes writing directly to kafka, using
full Logstash, vs Logstash forwarder. That removes a bottleneck.
Happy to try and help, as this stuff is currently on my mind. Maybe some
more details about what queues you’re seeing filled up?
Cheers,
producer will
attempt a failed request
-Todd
On Sun, Nov 22, 2015 at 12:31 PM, Jan Algermissen <
algermissen1...@icloud.com> wrote:
> Hi Todd,
>
> yes, correct - thanks.
>
> However, what I am not getting is that the KafkaProducer (see my other
> mail from today) silently
will
perform an unclean leader election and select broker 2 (the only replica
available) as the leader for those partitions.
-Todd
On Sun, Nov 22, 2015 at 11:39 AM, Jan Algermissen <
algermissen1...@icloud.com> wrote:
> Hi,
>
> I have a topic with 16 partitions that shows the follo
replica information. That assumes you can shut the whole cluster down for a
while.
-Todd
On Fri, Nov 6, 2015 at 1:23 PM, Arathi Maddula
wrote:
> Hi,
> Is it possible to change the broker.ids property for a node belonging to a
> Kafka cluster? For example, currently if I have brokers
themselves. Your producers would all send their messages through
the GSLB to that endpoint, rather than talking to Kafka directly.
-Todd
On Tue, Nov 3, 2015 at 10:15 AM, Cassa L wrote:
> Hi,
> Has anyone used load balancers between publishers and Kafka brokers? I
> want to do activ
authentication, and supports
very high throughput.
It's still actively being developed, with a new release coming soon with
enhanced configuration through a new rest api (kontroller).
Cheers
Todd.
Sent from my BlackBerry 10 smartphone on the TELUS network.
Original Message
From: Guozhang
(you can use partition reassignment to
change it). But if they are not all the same, some of the tooling will
break (such as altering the partition count for the topic).
-Todd
On Fri, Oct 16, 2015 at 5:39 PM, Todd Palino wrote:
> Actually, be very careful with this. There are two differ
t are currently considered to be in sync. The important
distinction here is that this list can be shorter than the actual assigned
replica list (from the znode above) if not all of the replicas are in sync.
The state znode also has a 'leader' key which holds the broker ID of the
replica that is
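For illustration, the partition state znode described above (at /brokers/topics/&lt;topic&gt;/partitions/&lt;n&gt;/state) holds JSON along these lines; the broker IDs and epoch values here are made up:

```json
{
  "controller_epoch": 12,
  "leader": 1001,
  "version": 1,
  "leader_epoch": 7,
  "isr": [1001, 1003]
}
```

Here "isr" can be a subset of the assigned replica list when some replicas are out of sync, and "leader" names the broker currently serving reads and writes.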
We've had no problems with G1 in all of our clusters with varying load
levels. I think we've seen an occasional long GC here and there, but
nothing recurring at this point.
What's the full command line that you're using with all the options?
-Todd
On Wed, Oct 14, 2015 at 2
retention is 1 week, replication will copy over the last week's worth
of data. That data will not be expired for 1 week, as the expiration is
based on the file modification time.
There is work ongoing that will resolve this extra retention problem.
-Todd
On Monday, October 12, 2015, Raja
can lose messages).
Basically, you have to trade availability for correctness here. You get to
pick one.
-Todd
On Sun, Oct 11, 2015 at 5:10 PM, wrote:
> You can enable unclean leader election, which would allow the lagging
> partition to still become leader. There would be some dat
You can enable unclean leader election, which would allow the lagging partition
to still become leader. There would be some data loss (offsets between the
lagging partition and the old leader) but the partition would stay online and
available.
Sent from my BlackBerry 10 smartphone on the TELUS
and if it can be handled separately inside of the
application.
-Todd
On Thu, Oct 8, 2015 at 8:50 AM, Mark Drago wrote:
> Gwen,
>
> Thanks for your reply. I understand all of the points you've made. I
> think the challenge for us is that we have some consumers that are
> intereste
What Python library are you using?
In addition, there's no real guarantee that any two libraries will
implement consumer balancing using the same algorithm (if they do it at
all).
-Todd
On Wednesday, September 30, 2015, Rahul R wrote:
> I have 2 kafka consumers. Both the consumers
for Cassandra or Redis or anything else).
Then your real consumers can all consume their separate topics. Reading and
writing the data one extra time is much better than rereading all of it 400
times and throwing most of it away.
-Todd
On Wed, Sep 30, 2015 at 9:06 AM, Ben Stopford wrote:
> Hi
sufficient.
-Todd
On Mon, Sep 28, 2015 at 9:53 AM, Jason Rosenberg wrote:
> Just to clarify too, if the only use case for log-compaction we use is for
> the __consumer_offsets, we should be ok, correct? I assume compression is
> not used by default for consumer offsets?
>
> Jason
>
trying to reduce it.
-Todd
On Saturday, September 26, 2015, noah wrote:
> Thanks, that gives us some more to look at.
>
> That is unfortunately a small section of the log file. When we hit this
> problem (which is not every time,) it will continue like that for hours.
>
>
be idle because they do not own partitions.
-Todd
On Fri, Sep 25, 2015 at 3:27 PM, noah wrote:
> We're seeing this the most on developer machines that are starting up
> multiple high level consumers on the same topic+group as part of service
> startup. The consumers do not se
tions.
-Todd
On Fri, Sep 25, 2015 at 1:25 PM, Gwen Shapira wrote:
> How busy are the clients?
>
> The brokers occasionally close idle connections, this is normal and
> typically not something to worry about.
> However, this shouldn't happen to consumers that are actively reading d
or so consumers moved
over to Kafka committed offsets at this point.
Of course, just those apps do cover well over a hundred consumer groups :)
-Todd
On Thursday, September 24, 2015, James Cheng wrote:
>
> > On Sep 24, 2015, at 8:11 PM, Todd Palino > wrote:
> >
> > W
are considered infrastructure
applications for Kafka), but we're not encouraging other internal users to
switch over just yet.
-Todd
On Wed, Sep 23, 2015 at 3:21 PM, James Cheng wrote:
>
> On Sep 18, 2015, at 10:25 AM, Todd Palino wrote:
>
> > I think the last major
ou've only
written in 1k of messages, you have a long way to go before that segment
gets rotated. This is why the retention is referred to as a minimum time.
You can easily retain much more than you're expecting for slow topics.
-Todd
On Mon, Sep 21, 2015 at 7:28 PM, allen chan
wrote: