You can enable the producer debug log and verify. In 0.8.2.0, you can set the
compressionType, requiredNumAcks, and syncSend producer config properties in
log4j.xml. A trunk build can take an additional retries property.
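A minimal log4j.xml appender sketch using those properties (the param capitalization, broker address, and topic below are my assumptions, not taken from this thread; verify the names against your Kafka version):

```xml
<!-- Sketch: Kafka log4j appender configured with the properties
     mentioned above. Broker list and topic are placeholders. -->
<appender name="KAFKA" class="kafka.producer.KafkaLog4jAppender">
  <param name="BrokerList" value="localhost:9092"/>
  <param name="Topic" value="app-logs"/>
  <param name="CompressionType" value="gzip"/>
  <param name="RequiredNumAcks" value="1"/>
  <param name="SyncSend" value="true"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d %-5p %c - %m%n"/>
  </layout>
</appender>
```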
Manikumar
On Thu, Jun 18, 2015 at 1:14 AM, Madhavi Sreerangam
Hi all,
My server has only one hard drive, and disk I/O has now become a bottleneck. I
want to add two more hard disks, but I did not find a relevant online upgrade
document. Please help.
Hello, I use Kafka 0.8.2. I have a three-broker Kafka cluster.
I stop one broker and copy recovery-point-offset-checkpoint to
override replication-offset-checkpoint. After that, I start the broker.
But I find that the broker could not be added to ISR anymore. And the
`logs/state-change.log`
The default value of queued.max.requests is 500. However, the sample
production config in the documentation (
http://kafka.apache.org/documentation.html#prodconfig)
sets queued.max.requests to 16.
Can anyone elaborate on the recommended value of 16 and the trade-offs of
increasing or decreasing
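For context, the setting is a broker property in server.properties; the comment below states the trade-off as I understand it, not as a quote from the docs:

```properties
# Number of requests allowed to queue up before the network threads
# stop reading new requests. A low value (e.g. 16) applies backpressure
# to clients sooner and bounds broker memory use; the default of 500
# absorbs larger bursts at the cost of more queued memory and
# potentially higher request latency under load.
queued.max.requests=500
```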
Hello,
Thought you might find these resources interesting:
http://techblog.netflix.com/2015/06/nts-real-time-streaming-for-test.html
http://www.slideshare.net/wangxia5/netflix-kafka
Cheers,
Peter
Thanks Adam for your response.
I will have a mechanism to handle duplicates on the service consuming the
messages.
Just curious if there is a way to identify the cause of receiving
duplicates.
I mean, is there any log file that could help with this?
Regards,
Kris
On Wed, Jun 17, 2015 at 8:24 AM, Adam
Hi,
I have started the Kafka server as a background process; however, the following
INFO log appears on the console every 10 seconds.
It looks like it is not an error, since its log level is INFO. How can I
suppress this annoying log? Thanks
[2015-06-19 13:34:10,884] INFO Closing socket connection to
The Kafka broker was stuck, so I restarted the whole cluster. It works now.
Thank you very much.
On Thu, Jun 18, 2015 at 7:37 PM, haosdent haosd...@gmail.com wrote:
Hello, I use Kafka 0.8.2. I have a three-broker Kafka cluster.
I stop one broker and copy recovery-point-offset-checkpoint to
What version of Kafka are you using? This was changed to debug level in
0.8.2.
~ Joestein
On Jun 18, 2015 10:39 PM, bit1...@163.com bit1...@163.com wrote:
Hi,
I have started the Kafka server as a background process; however, the
following INFO log appears on the console every 10 seconds.
Looks
The following are the jars in my classpath:
1. slf4j-log4j12-1.6.6.jar
2. slf4j-api-1.6.6.jar
3. zookeeper-3.4.6.jar
4. kafka_2.11-0.8.3-SNAPSHOT.jar
5. kafka_2.11-0.8.2.1.jar
6. kafka-clients-0.8.2.1.jar
7. metrics-core-2.2.0.jar
8. scala-library-2.11.5.jar
9. zkclient-0.3.jar
Am I missing
Thanks Peter!
Great information, especially challenges and strategies. We're rolling out
Kafka now and it's interesting to see how others have done it.
On Thu, Jun 18, 2015 at 10:17 AM, Peter Hausel peter.hau...@gmail.com
wrote:
Hello,
Thought you might find these resources interesting:
Hi,
I have a requirement to transfer around 500 GB of logs per day from app servers
to HDFS. What would be the ideal Kafka cluster size?
Thanks
Rajat
I'm assuming you are sending data in a continuous stream and not a
single large batch:
500 GB a day ≈ 21 GB an hour ≈ 5.8 MB a second.
A minimal 3-node cluster should work. You also need enough storage for a
reasonable retention period (about 15 TB per month, before replication).
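The arithmetic above can be checked with a quick script (a sketch; it assumes decimal units and a 30-day month):

```python
SECONDS_PER_DAY = 24 * 60 * 60          # 86,400
daily_gb = 500                           # ingest volume from the question

mb_per_second = daily_gb * 1_000 / SECONDS_PER_DAY   # sustained write rate
gb_per_hour = daily_gb / 24
raw_tb_per_month = daily_gb * 30 / 1_000             # before replication

# With a replication factor of 3, on-disk usage triples.
replicated_tb_per_month = raw_tb_per_month * 3

print(f"{mb_per_second:.1f} MB/s, {gb_per_hour:.1f} GB/h, "
      f"{raw_tb_per_month:.0f} TB/month raw, "
      f"{replicated_tb_per_month:.0f} TB/month replicated")
```

Note that the 15 TB/month figure is raw data only; actual disk sizing must multiply by the replication factor.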
On Thu, Jun 18, 2015 at 10:39 AM,
Sorry for spamming, but any help would be greatly appreciated!
On Thu, Jun 18, 2015 at 10:49 AM, Srividhya Anantharamakrishnan
srivid...@hedviginc.com wrote:
The following are the jars in my classpath:
1. slf4j-log4j12-1.6.6.jar
2. slf4j-api-1.6.6.jar
3. zookeeper-3.4.6.jar
4.
Hi, Folks
We have an online Kafka cluster, v0.8.1.1.
We ran a partition reassignment script which maps each partition to
3 replicas. But the growth of the data has exceeded my expectations, and I
really need to decrease the replicas for each partition to 2 or 1.
What's the best way to do this?
Thanks !
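One approach (a sketch, with placeholder topic name and broker ids): give kafka-reassign-partitions.sh a reassignment JSON that simply lists fewer replicas per partition.

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 3]}
  ]
}
```

Submit it with bin/kafka-reassign-partitions.sh --zookeeper &lt;zk&gt; --reassignment-json-file shrink.json --execute, and check progress with --verify before submitting anything else; a reassignment started while a previous one is still in progress will not change the replica list as expected.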
We are in a situation where we need at-least-once delivery. We have a
thread that pulls messages off the consumer, puts them in a queue where
they go through a few async steps, and then, after the final step, we want
to commit the offset of the messages we have completed. There may be items
we have
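One way to make "commit only completed work" concrete is to track offsets that finish out of order and only advance the commit point over a contiguous prefix. A minimal sketch of that bookkeeping (the class name and the single-partition assumption are mine, not from this thread):

```python
class OffsetTracker:
    """Tracks offsets whose async processing has finished, and computes
    the highest offset that is safe to commit for one partition."""

    def __init__(self, start_offset: int) -> None:
        self._next = start_offset  # lowest offset not yet completed
        self._done = set()         # completed offsets above the watermark

    def mark_done(self, offset: int) -> None:
        """Record that processing of `offset` has finished."""
        self._done.add(offset)
        # Advance the watermark over any contiguous run of completions.
        while self._next in self._done:
            self._done.remove(self._next)
            self._next += 1

    def safe_to_commit(self) -> int:
        """Offset to pass to the commit call: everything below it is done."""
        return self._next
```

The value returned by safe_to_commit() is the next offset to consume, which is what Kafka's commit APIs expect. On restart, replay begins there, so duplicates remain possible (hence at-least-once), but no message that completed processing is ever skipped.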
Hi Carl,
Producer-side retries can result in the same message being sent to the
brokers more than once, with different offsets. You may also get duplicates
when the high-level consumer's offset has not been saved or committed but
you have already processed the data and your server restarts, etc.
To guarantee
It looks like you have mixed versions of the Kafka jars:
4. kafka_2.11-0.8.3-SNAPSHOT.jar
5. kafka_2.11-0.8.2.1.jar
6. kafka-clients-0.8.2.1.jar
I think org.apache.kafka.common.utils.Utils is very new, probably post
0.8.2.1, so it's probably caused by the kafka_2.11-0.8.3-SNAPSHOT.jar being
If I use the same approach to reassign a smaller number of replicas to the
same partition, I get this error:
(0,5,1,6,2,3) are the current replicas, and (6) is the new list I want to
assign to partition 0 of the topic:
Assigned replicas (0,5,1,6,2,3) don't match the list of replicas for
reassignment (6)
Hi There,
while investigating higher-than-expected CPU utilization on one of our
Kafka brokers, we noticed multiple instances of the kafka-network-thread
running in a BLOCKED state, all of which are waiting for a single thread
to release a lock.
Here is an example from a stack trace:
(blocked
Hi Olof,
I am just wondering what the benefit of rebalancing with a minimal number
of reassignments is here.
I am asking this because in the new consumer, the rebalance is actually
quite cheap on the consumer side: just updating a topic-partition set. That
means the actual rebalance time on
Well, I solved my problem. For the record, the problem was that a prior
reassignment had not finished when the new reassignment was kicked off, in
which case the replica list won't change as expected. Given that our data
is at a relatively large scale, the reassignment just sat without progress
for a day.
The solution is
Any ideas on why one of the brokers, which was down for a day, fails to
restart with the exception below? The 10-node cluster has been up and running
fine for quite a few weeks.
[2015-06-18 16:44:25,746] ERROR [app=broker] [main] There was an error in one
of the threads during logs loading:
Kafka 0.8.2.1
I have `unclean.leader.election.enable=false` in server.properties
I can see this log in server.log:
[2015-06-18 09:57:18,961] INFO Property unclean.leader.election.enable is
overridden to false (kafka.utils.VerifiableProperties)
Yet the topic was created with
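A per-topic override set at creation time shows up when the topic is described (a sketch; the topic name and ZooKeeper address are placeholders):

```shell
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic my-topic
# Per-topic overrides, if any, appear under "Configs:" in the output.
```

A topic-level unclean.leader.election.enable override takes precedence over the broker default in server.properties, which would explain the behavior described above.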