Hello,
How can I find, using the Kafka 0.8.1.1 API, the count of uncommitted offsets
(unread messages) for a particular topic and its consumer group?
I have been looking at AdminUtils, the topic command, and OffsetRequest. Is
there any specific class in the Kafka API I can use to find these things?
Hi all,
Does anyone have info about the JMX metric
kafka.server:type=KafkaServer,name=BrokerState, or what the numeric values
mean?
--
Allen Michael Chan
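For what it's worth, the numbers reported by that gauge correspond to broker lifecycle states defined in the broker source (kafka.server.BrokerStates). The mapping below reflects my reading of the 0.8.x/0.9.x code and should be double-checked against the version you are running:

```python
# BrokerState gauge values -> lifecycle states, per kafka.server.BrokerStates
# in the 0.8.x/0.9.x source (verify against your broker version).
BROKER_STATES = {
    0: "NotRunning",
    1: "Starting",
    2: "RecoveringFromUncleanShutdown",
    3: "RunningAsBroker",
    4: "RunningAsController",
    6: "PendingControlledShutdown",
    7: "BrokerShuttingDown",
}
print(BROKER_STATES.get(3))  # a healthy broker normally reports 3
```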
Hi, I tried building this today and the problem seems to remain.
/svante
[INFO] Building kafka-connect-hdfs 2.0.0-SNAPSHOT
[INFO]
Downloading:
Hi, Jason,
I tried the same command both with specifying a formatter and without - same
result:
=> /opt/kafka/bin/kafka-console-consumer.sh --formatter
kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
/tmp/consumer.properties --topic __consumer_offsets --zookeeper
The 0.9.0 client does not yet support admin requests like topic creation, so
you still need to do it through AdminUtils for now.
We plan to add this support after KIP-4 is adopted:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations
Marina,
To check whether the topic exists in Kafka (i.e. offsets are stored in Kafka
instead of in ZK), you can check this path in ZK:
/brokers/topics/__consumer_offsets
By default this topic should have 50 partitions.
Guozhang
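As a side note, each group's offsets land in exactly one of those partitions, chosen (as far as I can tell from the broker source of that era) as the group id's Java hashCode modulo the partition count. A small Python sketch that reproduces Java's String.hashCode to locate a group's partition, assuming the default of 50 partitions:

```python
def java_string_hashcode(s: str) -> int:
    """Reproduce Java's String.hashCode with 32-bit overflow semantics."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - (1 << 32) if h >= (1 << 31) else h

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    # Masking with 0x7FFFFFFF mirrors Kafka's Utils.abs (not Math.abs),
    # which avoids the negative result for Integer.MIN_VALUE.
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition_for("test"))  # -> 48
```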
On Thu, Dec 3, 2015 at 6:22 AM, Marina
I am attempting to understand the details of the content of the log segment
file in Kafka.
The documentation (http://kafka.apache.org/081/documentation.html#log)
suggests:
The exact binary format for messages is versioned and maintained as a standard
interface so message sets can be
Hello All,
I'm using "librdkafka" for my C project, which needs to support a
geo-redundant Kafka setup (ZooKeeper + broker).
Machine 1 : Producer1 : Broker IP"sysctrl1.vsepx.broker.com:9092,
sysctrl2.vsepx.broker.com:9092"
Machine 2 : Broker1 :
Hi All,
Is it possible to create a topic programmatically with a specific topic
configuration (number of partitions, replication factor, retention time, etc)
using just the new 0.9.0 client jar?
-Erik
Hi, Guozhang,
Yes, I can see this topic and partitions in ZK:
ls /brokers/topics/__consumer_offsets
[partitions]
ls /brokers/topics/__consumer_offsets/partitions
[44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2, 3, 4,
5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
If you can validate that these partitions have data (i.e. there are some
offsets committed to Kafka), then you may have to turn on debug-level logging
in config/tools-log4j.properties, which will allow the console consumer to
print debug-level logs, and see if there is anything suspicious.
Guozhang
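For reference, turning on debug logging for the command-line tools typically means raising the root logger level in that file to something like the following (a sketch; the exact appender name and layout depend on your distribution):

```properties
# config/tools-log4j.properties - raise tool logging to DEBUG (sketch)
log4j.rootLogger=DEBUG, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.Target=System.err
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
```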
On
Hi Marina,
You can dump your __consumer_offsets logs manually in order to see if
anything is in them. Hop onto each of your brokers and run the following
command:
for f in $(find /path/to/kafka-logs/__consumer_offsets-* -name "*\.log");
do
Hello,
We are on an older kafka (0.8.1) version. While a number of consumers were
running, we attempted to delete a few topics using the kafka-topics.sh file
(basically want to remove all messages in that topic and restart, since our
entities went through some incompatible changes). We
A little background:
I have a decent-sized Kafka cluster. Each of the data nodes has two NICs
with separate IPs. We are finding that the distribution of network traffic
is not balanced between the two. Is there a way to make it so that
producers write to Kafka on one interface and
Hi,
messages are stored on disk in the Kafka (network) protocol format, so if
you have a look at the protocol guide you'll see the pieces start coming
together:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets
Regards,
Magnus
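To make that concrete, here is a hypothetical Python sketch that builds and re-parses one v0 (magic byte 0, pre-0.10) message-set entry following the layout in that protocol guide: offset (8 bytes), message size (4), then CRC (4), magic (1), attributes (1), and length-prefixed key and value. It is a simplified round-trip illustration, not a full parser (null keys, compression, etc. are omitted):

```python
import struct
import zlib

def build_entry(key: bytes, value: bytes, offset: int = 0) -> bytes:
    """Serialize one v0 message-set entry (sketch of the on-disk layout)."""
    body = struct.pack(">bb", 0, 0)                # magic=0, attributes=0
    body += struct.pack(">i", len(key)) + key      # length-prefixed key
    body += struct.pack(">i", len(value)) + value  # length-prefixed value
    crc = zlib.crc32(body) & 0xFFFFFFFF            # CRC covers magic..value
    msg = struct.pack(">I", crc) + body
    return struct.pack(">qi", offset, len(msg)) + msg  # offset + message size

def parse_entry(buf: bytes):
    """Read back offset, key, and value from a serialized entry."""
    offset, size = struct.unpack_from(">qi", buf, 0)   # 12-byte entry header
    crc, magic, attrs = struct.unpack_from(">Ibb", buf, 12)
    pos = 18
    klen, = struct.unpack_from(">i", buf, pos); pos += 4
    key = buf[pos:pos + klen]; pos += klen
    vlen, = struct.unpack_from(">i", buf, pos); pos += 4
    value = buf[pos:pos + vlen]
    return offset, key, value

entry = build_entry(b"k1", b"hello")
print(parse_entry(entry))  # -> (0, b'k1', b'hello')
```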
Good to know. Thanks Tao.
On Wed, Dec 2, 2015 at 5:42 PM, tao xiao wrote:
> It does help to increase the poll timeout to Long.MAX_VALUE. I get
> messages in every poll, but the time between each poll is long. That is
> how I discovered it was a network issue, btw
Delete was actually considered to be working since Kafka 0.8.2 (although
there are still some not-easily-reproducible edge cases where it doesn't work
well, even in 0.8.2 or newer).
In 0.8.1 one could request a topic to be deleted (the request gets stored as
an entry in ZooKeeper), because of presence of the
Hi Mayuresh
These are some of the relevant logs that I could find
[2015-12-03 16:04:23,594] INFO Loading log 'merckx.raw.event.type-0'
(kafka.log.LogManager)
[2015-12-03 16:04:23,595] INFO Completed load of log merckx.raw.event.type-0
with log end offset 2 (kafka.log.Log)
[2015-12-03
you can use the zookeeper shell inside the bin directory for that.
Thanks,
Mayuresh
On Thu, Dec 3, 2015 at 4:04 PM, Rakesh Vidyadharan <
rvidyadha...@gracenote.com> wrote:
> Thanks Stevo. I did see some messages related to /admin/delete_topics.
> Will do some research on how I can clean up
Thanks Mayuresh. I was able to use the shell to delete the entries and things
are working fine now.
On 03/12/2015 18:22, "Mayuresh Gharat" wrote:
>you can use the zookeeper shell inside the bin directory for that.
>
>Thanks,
>
>Mayuresh
>
>On Thu, Dec 3, 2015 at
Can you paste some logs from the controller, when you deleted the topic?
Thanks,
Mayuresh
On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
rvidyadha...@gracenote.com> wrote:
> Hello,
>
> We are on an older kafka (0.8.1) version. While a number of consumers
> were running, we attempted to
Hi Rakesh,
Topic deletion didn't really work properly until 0.8.2. Here's a
stackoverflow link that summarizes how to work around this limitation:
http://stackoverflow.com/questions/24287900/delete-topic-in-kafka-0-8-1-1
HTH,
Steve
On Thu, Dec 3, 2015 at 3:33 PM, Mayuresh Gharat
Thanks Stevo. I did see some messages related to /admin/delete_topics. Will
do some research on how I can clean up zookeeper.
Thanks
Rakesh
On 03/12/2015 17:55, "Stevo Slavić" wrote:
>Delete was actually considered to be working since Kafka 0.8.2 (although
>there are
Thank you, Lance - this is very useful info! I did figure out what was wrong
in my case - the offsets were legitimately not stored in Kafka; they were
stored in ZooKeeper, and I was using the wrong command to inspect the ZK
content - doing 'ls' instead of 'get'.
Once I used 'get' I could see correct