Hi Allen,
I was referring to one of the issues here:
http://search-hadoop.com/m/uyzND1XVyK12UNtd32/kafka+orphaned/v=threaded
This linked thread discusses one such issue, where consumer lag was not
reported correctly.
Regards,
Prabhjot
On Sun, Nov 15, 2015 at 7:04 AM, allen chan wrote:
Hi All,
We are using the Apache Spark 1.5.1 and kafka_2.10-0.8.2.1 and Kafka
DirectStream API to fetch data from Kafka using Spark.
Kafka topic properties: Replication Factor :1 and Partitions : 1
Kafka cluster size: 3 Nodes
When all Kafka nodes are up & running, I could successfully get the
Data from a single partition cannot be consumed by multiple threads in a
consumer group, so having more consumer threads than partitions means that
some consumer threads will receive no data.
Consumer parallelism depends on the number of partitions. If a topic has
more partitions
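The partition-to-thread relationship above can be sketched with a toy assignment (plain Python, not Kafka's actual assignor; function and consumer names are illustrative):

```python
def assign_round_robin(partitions, consumers):
    """Distribute partition ids round-robin across consumer ids.

    Toy model of why extra consumers in a group go idle: with fewer
    partitions than consumers, some consumers get an empty assignment.
    """
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(partitions):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# 1 partition, 3 consumer threads in the same group:
print(assign_round_robin([0], ["t1", "t2", "t3"]))
# t2 and t3 receive no partitions, hence no data.
```

With 4 partitions and 2 consumers, both consumers get 2 partitions each; parallelism only grows with the partition count.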
You may need to consider the replication factor as well. Here you increased
the partition count 4x, but with a replication factor of 3 there will be
(4 * 3) times as many partition replicas stored across the cluster.
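The arithmetic above, as a quick sanity check (illustrative numbers):

```python
# Total partition replicas stored across the cluster is
# partitions * replication_factor.
partitions = 4          # after the 4x increase
replication_factor = 3
total_replicas = partitions * replication_factor
print(total_replicas)   # 12
```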
On Sat, Nov 21, 2015 at 12:43 AM, Chen Song wrote:
> Any thoughts on
Any thoughts on this topic? Not sure if others have seen the same spike as
we have.
On Tue, Nov 17, 2015 at 3:51 PM, Chen Song wrote:
> BTW, we are running Kafka 0.8.2.2.
>
> On Tue, Nov 17, 2015 at 3:48 PM, Chen Song wrote:
>
>> We have a cluster of
Spark specific questions are better directed to the Spark user list.
Spark will retry failed tasks automatically up to a configurable number of
times. The direct stream will retry failures on the driver up to a
configurable number of times.
See
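The bounded-retry behavior described above can be sketched as a toy loop (not Spark's actual scheduler; `max_failures` stands in for a setting like `spark.task.maxFailures`):

```python
def run_with_retries(task, max_failures):
    """Retry a failing task up to max_failures attempts, then give up."""
    last_err = None
    for attempt in range(1, max_failures + 1):
        try:
            return task()
        except Exception as e:
            last_err = e
    raise RuntimeError("task failed after %d attempts" % max_failures) from last_err

# A task that fails twice with a transient error, then succeeds:
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise IOError("transient fetch error")
    return "ok"

print(run_with_retries(flaky, max_failures=4))  # ok
```

Once the attempt budget is exhausted, the failure is surfaced to the caller instead of retrying forever.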
Hello,
I am using Kafka for my thesis project and need to run some performance tests.
I managed to send 5 MB of data in the producer performance test, as below. For
testing, I have 3 Zookeeper nodes, 2 brokers, 1 producer, and 1 consumer on a
single machine. I tried the consumer perf test, but it failed.
Hello again,
I learned that the number of threads shouldn't exceed the number of
partitions, so I ran it as below.
bin/kafka-consumer-perf-test.sh --zookeeper localhost:2181 --messages 1000
--topic perftest --threads 1 --fetch-size 5242880 --socket-buffer-size
2147483646
start.time, end.time,
Also, if you actually want to use kafka, you're much better off with a
replication factor greater than 1, so you get leader re-election.
On Fri, Nov 20, 2015 at 9:20 AM, Cody Koeninger wrote:
> Spark specific questions are better directed to the Spark user list.
>
> Spark
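A minimal sketch of why a replication factor greater than 1 enables leader re-election (toy model, not Kafka's actual controller logic; broker ids are illustrative):

```python
def elect_leader(replicas, alive):
    """Pick the first alive replica as the new leader, or None if the
    partition has no surviving replica and goes offline."""
    for broker in replicas:
        if broker in alive:
            return broker
    return None

# Replication factor 3: leader on broker 1 dies, broker 2 takes over.
replicas_rf3 = [1, 2, 3]
print(elect_leader(replicas_rf3, alive={2, 3}))  # 2

# Replication factor 1: the only replica dies, partition is unavailable.
replicas_rf1 = [1]
print(elect_leader(replicas_rf1, alive={2, 3}))  # None
```

With replication factor 1, losing the single broker hosting a partition means there is no candidate to fail over to.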
Are there any command line or UI tools available to monitor kafka?
I am using the latest stable release of Kafka and trying to post a message.
However I see this error:
Client:
Exception in thread "main" *kafka.common.FailedToSendMessageException*:
Failed to send messages after 3 tries.
at kafka.producer.async.DefaultEventHandler.handle(
On the server side this is what I see:
[2015-11-20 14:45:31,849] INFO Closing socket connection to /177.40.23.2.
(kafka.network.Processor)
On Fri, Nov 20, 2015 at 11:51 AM, Mohit Anchlia
wrote:
> I am using latest stable release of Kafka and trying to post a message.
>
LinkedIn made a great tool!
https://github.com/linkedin/Burrow
On Fri, Nov 20, 2015 at 10:32 AM, Mohit Anchlia
wrote:
> Are there any command line or UI tools available to monitor kafka?
>
Hey Siyuan,
The commit API should work the same regardless of whether subscribe() or
assign() was used. Does this not appear to be working?
Thanks,
Jason
On Wed, Nov 18, 2015 at 4:40 PM, hsy...@gmail.com wrote:
> In the new API, the explicit commit offset method call only works
Yonghui,
What is the ack mode for the producer clients? And are msg1 and msg2 sent
by the same producer?
Guozhang
On Thu, Nov 19, 2015 at 10:59 PM, Yonghui Zhao
wrote:
> Broker setting is: 8 partitions, 1 replica, kafka version 0.8.1
>
> We send 2 messages at almost
I suppose I should have added one qualification to that. The commit API
will not work for a consumer using manual assignment if its groupId is
shared with another consumer using automatic assignment (with subscribe()).
When a consumer group is active, Kafka only allows commits from members of
that
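The qualification above can be sketched as a toy coordinator that only accepts commits from members of an active group (illustrative only, not the broker's actual group protocol):

```python
class ToyCoordinator:
    """Accept offset commits only from members of an active group."""

    def __init__(self):
        self.members = {}   # group_id -> set of member ids (via subscribe())
        self.offsets = {}   # (group_id, partition) -> committed offset

    def join(self, group_id, member_id):
        self.members.setdefault(group_id, set()).add(member_id)

    def commit(self, group_id, member_id, partition, offset):
        active = self.members.get(group_id)
        if active and member_id not in active:
            # A manual-assignment consumer sharing a groupId with an
            # active subscribe() group: its commit is rejected.
            return False
        self.offsets[(group_id, partition)] = offset
        return True

coord = ToyCoordinator()
coord.join("g1", "subscriber-A")
print(coord.commit("g1", "subscriber-A", 0, 42))   # True
print(coord.commit("g1", "standalone-B", 0, 99))   # False
```

If no subscribe() consumers are active in the group, the standalone consumer's commits go through unhindered.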
Yonghui,
You can use ZookeeperConsumerConnector.commitOffsets() to commit arbitrary
offsets, but be careful using it to seek forward or backward: you need to
make sure everyone reads the committed offsets right after they are written
(e.g., by forcing a rebalance), and that no one else overrides them beforehand.
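The override caveat above can be sketched as a last-write-wins store (toy model of ZooKeeper-backed offset storage):

```python
# Offset storage is last-write-wins: a later commit by another consumer
# silently replaces the offset you just wrote for your seek.
offsets = {}

def commit_offset(topic_partition, offset):
    offsets[topic_partition] = offset

commit_offset(("perftest", 0), 100)   # you seek back to offset 100
commit_offset(("perftest", 0), 500)   # another consumer commits 500
print(offsets[("perftest", 0)])       # 500: your seek was overridden
```

This is why everyone in the group must pick up the committed offset (e.g., via a rebalance) before any other consumer commits again.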
This is the fourth candidate for release of Apache Kafka 0.9.0.0. This is a
major release that includes (1) authentication (through SSL and SASL) and
authorization, (2) a new Java consumer, (3) a Kafka Connect framework for
data ingestion and egress, and (4) quotas. Since this is a major
release,