Are there any? My app ran for a few hours and filled a 100G partition (on 5
machines).
Any settings to keep this growth in check? Perhaps to estimate how much
space it's going to need?
It's basically just a consumer like any other. The application.id is used
as the consumer group.id, so you can just use whatever tools you already
use to check consumer lag.
-Matthias
On 12/9/16 5:49 PM, Jon Yeargers wrote:
> How would this be done?
>
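Since a Streams application registers under its application.id as an ordinary consumer group, the standard group-describe tooling reports its lag per partition. A minimal sketch, assuming the 0.10.x CLI; the group name "MinuteAgg" and the bootstrap server are placeholders for your own:

```shell
# The Streams app joins the group named by its application.id, so the
# regular consumer-group tool shows its per-partition lag.
# (--new-consumer is required for Kafka-backed groups on 0.10.x.)
bin/kafka-consumer-groups.sh \
  --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe \
  --group MinuteAgg
```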
Have a look into this thread:
http://mail-archives.apache.org/mod_mbox/kafka-users/201606.mbox/%3c49878526-ad9b-42ef-a220-728a5ae13...@parkassist.com%3E
On 12/10/16 4:03 AM, Jon Yeargers wrote:
> Are there any? My app ran for a few hours and filled a 100G partition (on 5
> machines).
>
> Any
Hi everyone,
What is the most efficient way to quickly consume, transform, and process
Kafka messages? Importantly, I am not referring to, nor interested in, Kafka
Streams, because the Kafka topic from which I would like to process the
messages will eventually stop receiving messages, after which I
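One common pattern for draining a topic that will eventually stop receiving messages is to snapshot the end offsets up front and poll until every assigned partition's position reaches its snapshot. A minimal sketch, assuming the 0.10.1 Java consumer (`endOffsets` was added in 0.10.1.0); the topic, group id, and bootstrap server are placeholders:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class DrainTopic {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "drain-example");           // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (consumer.assignment().isEmpty()) {
                consumer.poll(100); // wait until the group assigns us partitions
            }
            // Snapshot the end offsets once; anything appended afterwards is ignored.
            Map<TopicPartition, Long> end = consumer.endOffsets(consumer.assignment());
            boolean done = false;
            while (!done) {
                for (ConsumerRecord<String, String> rec : consumer.poll(1000)) {
                    process(rec); // your transform/processing step
                }
                done = true;
                for (Map.Entry<TopicPartition, Long> e : end.entrySet()) {
                    if (consumer.position(e.getKey()) < e.getValue()) {
                        done = false; // still behind on at least one partition
                    }
                }
            }
        }
    }

    private static void process(ConsumerRecord<String, String> rec) {
        // transformation logic goes here
    }
}
```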
Hi Kafka Users,
I'm looking for a bit of clarification on the documentation for
implementing a SourceTask. I'm reading a replication stream from a
database in my SourceTask, and I'd like to use commit or commitRecord to
advance the other system's replication stream pointer so that it knows I
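For reference, `commitRecord` is invoked by the framework after each record produced from `poll()` has been acknowledged by Kafka, which makes it a natural hook for advancing an upstream pointer. A sketch under stated assumptions: the `ReplicationStream` interface, its `advanceTo` method, and the `"position"` offset key are hypothetical stand-ins for the database's replication API, not part of Kafka Connect:

```java
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Sketch only: the surrounding task (version/start/poll/stop) is elided.
public abstract class ReplicationSourceTask extends SourceTask {

    private ReplicationStream stream; // hypothetical handle to the DB's stream

    @Override
    public void commitRecord(SourceRecord record) throws InterruptedException {
        // Called once this record has been written to (and acked by) Kafka,
        // so it is safe to let the upstream system discard up to this point.
        Long pos = (Long) record.sourceOffset().get("position"); // hypothetical key
        if (pos != null) {
            stream.advanceTo(pos);
        }
    }

    interface ReplicationStream {
        void advanceTo(long position); // hypothetical external API
    }
}
```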
Are you running something else besides the consumers that would maintain a
memory of the topics and potentially recreate them by issuing a metadata
request? For example, Burrow (the consumer monitoring app I wrote) does
this because it maintains a list of all topics in memory, and will end up
This error doesn't necessarily mean that a broker is down; it can also mean
that too many replicas for that topic partition have fallen behind the
leader, which indicates your replication is lagging for some reason.
You'll want to be monitoring some of the metrics listed here:
As documented here http://kafka.apache.org/documentation#os Linux and
Solaris have been tested, with Linux being the most common platform and the
one regularly tested within the project itself. Since AIX is a Unix it'll
probably work fine there, and I believe IBM at least provided documentation
if
The simple consumer doesn't do auto-commit. It really only issues
individual low-level Kafka protocol request types, so `commitOffsets` is
the only way it should write offsets.
Is it possible it crashed after the request was sent but before finishing
reading the response?
Side-note: I know you
Consumer groups aren't going to handle 'let it crash' particularly well
(this is true of any session-based service, but especially consumer groups,
since a single failure affects the entire group). That said, 'let it crash'
doesn't necessarily have to mean 'don't try to clean up at all'. The
consumer
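One minimal form of cleanup is simply closing the consumer on the way down, so the group rebalances immediately instead of waiting out the session timeout. A sketch, assuming the 0.10.x Java consumer; topic, group, and bootstrap server are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;

public class CleanShutdownConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        final KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props);
        // wakeup() is the one thread-safe consumer method; it aborts a blocked poll().
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
        try {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                consumer.poll(1000); // process records here
            }
        } catch (WakeupException e) {
            // expected on shutdown
        } finally {
            // close() leaves the group cleanly, so the rebalance happens right
            // away instead of after the session timeout expires.
            consumer.close();
        }
    }
}
```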
(I'm reporting these as I've moved to 0.10.1.0-cp2)
ERROR o.a.k.c.c.i.ConsumerCoordinator - User provided listener
org.apache.kafka.streams.processor.internals.StreamThread$1 for group
MinuteAgg failed on partition assignment
java.lang.IllegalStateException: task [1_9] Log end offset should not
Sai,
Attachments to the mailing list get filtered, you'll need to paste the
relevant info into your email.
-Ewen
On Fri, Dec 9, 2016 at 1:18 AM, Sai Karthi Golagani <
skgolagani...@fishbowl.com> wrote:
> Hi team,
>
> I've recently set up a Kafka server on an edge node and ZooKeeper on 3 separate
>
This came up a few times today:
2016-12-10 18:45:52,637 [StreamThread-1] ERROR
o.a.k.s.p.internals.StreamThread - stream-thread [StreamThread-1] Failed to
create an active task %s:
org.apache.kafka.streams.errors.ProcessorStateException: task [0_0] Error
while creating the state manager
On the producer side, there's not much you can do to reduce CPU usage if
you want low latency and don't have enough throughput to buffer multiple
messages -- you're going to end up sending 1 record at a time in order to
achieve your desired latency. Note, however, that the producer is thread
safe,
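Because the producer is thread safe, a single instance can be shared across all application threads, so one background sender thread and one set of broker connections serve the whole process. A minimal sketch; the bootstrap server is a placeholder, and `linger.ms=0` (the default) reflects the low-latency case discussed above:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SharedProducer {
    // One producer for the whole process: it is thread safe, and sharing it
    // lets a single sender thread handle records from many app threads.
    private static final KafkaProducer<String, String> PRODUCER = create();

    private static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("linger.ms", "0"); // favour latency over batching
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // Safe to call from any thread.
    public static void send(String topic, String key, String value) {
        PRODUCER.send(new ProducerRecord<>(topic, key, value));
    }
}
```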
Does Kafka automatically replicate the under replicated partitions?
I looked at these metrics through jmxterm; the value of
UnderReplicatedPartitions came out to be 0. What are the additional places
or metrics to look at? There seems to be a lack of documentation on Kafka
administration when it comes to
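Besides JMX, the topics tool can list any partitions whose in-sync replica set is smaller than the full replica set. A sketch, assuming the 0.10.x CLI; adjust the ZooKeeper connect string for your cluster:

```shell
# Prints only partitions that are currently under-replicated
# (ISR smaller than the assigned replica set).
bin/kafka-topics.sh --zookeeper localhost:2181 \
  --describe --under-replicated-partitions
```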
I've added the 'until()' clause to some aggregation steps and it's working
wonders for keeping the size of the state store within useful bounds... But
I'm not 100% clear on how it works.
What is implied by the '.until()' clause? What determines when to stop
receiving further data - is it clock time
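As I understand the 0.10.1 behaviour, the retention set by until() is driven by stream time (the record timestamps the application observes), not wall-clock time: a window's state is dropped once stream time advances past the window end plus the retention period. A minimal sketch of how it is specified; the window and retention sizes here are illustrative only:

```java
import org.apache.kafka.streams.kstream.TimeWindows;

public class WindowRetention {
    public static void main(String[] args) {
        // 0.10.1 Streams API: one-minute windows whose state is retained for
        // one hour. until() bounds how long the windowed store keeps old
        // windows, which is what keeps the store's size in check.
        TimeWindows windows = TimeWindows.of(60_000L).until(3_600_000L);
    }
}
```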
It's tough to read that stacktrace, but if I understand what you mean by
"running the kafka mirroring in destination cluster which is 0.10.1.0
version of kafka", then the problem is that you cannot use a 0.10.1.0
MirrorMaker with a 0.8.1 cluster. MirrorMaker is both a producer and
consumer, so
You may actually want this implemented in a Streams app eventually; there
is a KIP being discussed to support this type of incremental batch
processing in Streams:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-95%3A+Incremental+Batch+Processing+for+Kafka+Streams
However, for now the
Hagen,
What does "new consumer doesn't like the old brokers" mean exactly?
When upgrading MM, remember that it uses the clients internally so the same
compatibility rules apply: you need to upgrade both sets of brokers before
you can start using the new version of MM.
-Ewen
On Thu, Dec 8, 2016