Kafka uncommitted offset

2015-12-03 Thread Gaurav Agarwal
Hello
How can I find, using the Kafka 0.8.1.1 API, the count of uncommitted offsets
(unread messages) for a particular topic and a given consumer group ID?
I am looking at AdminUtils, TopicCommand, and OffsetRequest - is there any
specific class in the Kafka API which I can use to find these things?
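
A minimal sketch of one way to compute this per partition with the 0.8.1.1
APIs - the latest log-end offset via an OffsetRequest, minus the group's
committed offset, which 0.8.1.x keeps in ZooKeeper. Host names, topic, and
group below are placeholders; the bundled kafka.tools.ConsumerOffsetChecker
tool prints the same lag from the command line:

import java.util.HashMap;
import java.util.Map;

import kafka.api.PartitionOffsetRequestInfo;
import kafka.common.TopicAndPartition;
import kafka.javaapi.OffsetResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.serialize.ZkSerializer;

public class UnreadMessageCount {
    public static void main(String[] args) {
        String topic = "my-topic", group = "my-group"; // placeholders
        int partition = 0;

        // 1) Ask the broker leading this partition for its latest (log-end)
        //    offset. "broker-host" must be the partition's leader.
        SimpleConsumer consumer =
            new SimpleConsumer("broker-host", 9092, 100000, 64 * 1024, "lag-check");
        Map<TopicAndPartition, PartitionOffsetRequestInfo> req = new HashMap<>();
        req.put(new TopicAndPartition(topic, partition),
            new PartitionOffsetRequestInfo(kafka.api.OffsetRequest.LatestTime(), 1));
        OffsetResponse resp = consumer.getOffsetsBefore(new kafka.javaapi.OffsetRequest(
            req, kafka.api.OffsetRequest.CurrentVersion(), "lag-check"));
        long logEnd = resp.offsets(topic, partition)[0];
        consumer.close();

        // 2) Read the group's committed offset; 0.8.1.x stores it as a plain
        //    string in ZooKeeper under
        //    /consumers/<group>/offsets/<topic>/<partition>, so ZkClient
        //    needs a string (de)serializer.
        ZkClient zk = new ZkClient("zk-host:2181", 10000, 10000, new ZkSerializer() {
            public byte[] serialize(Object o) { return o.toString().getBytes(); }
            public Object deserialize(byte[] b) { return b == null ? null : new String(b); }
        });
        String committed =
            zk.readData("/consumers/" + group + "/offsets/" + topic + "/" + partition);
        zk.close();

        // Uncommitted (unread) messages for this partition:
        System.out.println(logEnd - Long.parseLong(committed));
    }
}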


BrokerState JMX Metric

2015-12-03 Thread allen chan
Hi all

Does anyone have info about this JMX metric,
kafka.server:type=KafkaServer,name=BrokerState, and what the numeric
values mean?
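
For reference, the values appear to come from kafka.server.BrokerStates in the
broker source: 0 = NotRunning, 1 = Starting, 2 = RecoveringFromUncleanShutdown,
3 = RunningAsBroker, 4 = RunningAsController, 6 = PendingControlledShutdown,
7 = BrokerShuttingDown.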

-- 
Allen Michael Chan


Re: kafka connect(copycat) question

2015-12-03 Thread Svante Karlsson
Hi, I tried building this today and the problem seems to remain.

/svante



[INFO] Building kafka-connect-hdfs 2.0.0-SNAPSHOT
[INFO]

Downloading:
http://packages.confluent.io/maven/io/confluent/kafka-connect-avro-converter/2.0.0-SNAPSHOT/maven-metadata.xml
Downloading:
http://packages.confluent.io/maven/io/confluent/kafka-connect-avro-converter/2.0.0-SNAPSHOT/kafka-connect-avro-converter-2.0.0-SNAPSHOT.pom
[WARNING] The POM for
io.confluent:kafka-connect-avro-converter:jar:2.0.0-SNAPSHOT is missing, no
dependency information available
Downloading:
http://packages.confluent.io/maven/io/confluent/common-config/2.0.0-SNAPSHOT/maven-metadata.xml
Downloading:
http://packages.confluent.io/maven/io/confluent/common-config/2.0.0-SNAPSHOT/common-config-2.0.0-SNAPSHOT.pom
[WARNING] The POM for io.confluent:common-config:jar:2.0.0-SNAPSHOT is
missing, no dependency information available
Downloading:
http://packages.confluent.io/maven/io/confluent/kafka-connect-avro-converter/2.0.0-SNAPSHOT/kafka-connect-avro-converter-2.0.0-SNAPSHOT.jar
Downloading:
http://packages.confluent.io/maven/io/confluent/common-config/2.0.0-SNAPSHOT/common-config-2.0.0-SNAPSHOT.jar
[INFO]

[INFO] BUILD FAILURE


> --
> Thanks,
> Ewen
>


Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Marina
Hi, Jason,

I tried the same command both with specifying a formatter and without - same 
result:

=> /opt/kafka/bin/kafka-console-consumer.sh --formatter 
kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config 
/tmp/consumer.properties --topic __consumer_offsets --zookeeper localhost:2181 
--from-beginning

^CConsumed 0 messages


When you are saying to make sure Consumers are using Kafka instead of ZK 
storage for offset... - what is the best way to do that? I thought checking the 
content via ZK shell - at this path:
/consumers/<group>/offsets/<topic>/<partition>

would be a definitive confirmation (there are no values stored there). Is it 
true?

I feel it is some small stupid mistake I'm making, since it seems to work fine 
for others - drives me crazy :)

thanks!!
Marina



- Original Message -
From: Jason Gustafson 
To: users@kafka.apache.org
Sent: Wednesday, December 2, 2015 2:33 PM
Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

Hey Marina,

My mistake, I see you're using 0.8.2.1. Are you also providing the
formatter argument when using console-consumer.sh? Perhaps something like
this:

bin/kafka-console-consumer.sh --formatter
kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper
localhost:2181 --topic __consumer_offsets --from-beginning

You may also want to confirm that your consumers are using Kafka instead of
Zookeeper for offset storage. If you still don't see anything, we can
always look into the partition data directly...

-Jason


On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson  wrote:

> Looks like you need to use a different MessageFormatter class, since it
> was renamed in 0.9. Instead use something like
> "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
>
> -Jason
>
> On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
> d.muralidha...@yottaa.com> wrote:
>
>> I have this same problem . Can someone help ?
>>
>> --Dhyan
>>
>> On Wed, Nov 25, 2015 at 3:31 PM, Marina  wrote:
>>
>> > Hello,
>> >
>> > I'm trying to find out which offsets my current High-Level consumers are
>> > working off. I use Kafka 0.8.2.1, with **no** "offset.storage" set in
>> the
>> > server.properties of Kafka - which, I think, means that offsets are
>> stored
>> > in Kafka. (I also verified that no offsets are stored in Zookeeper by
>> > checking this path in the Zk shell:
>> >
>> **/consumers/<group>/offsets/<topic>/<partition>**
>> > )
>> >
>> > I tried to listen to the **__consumer_offsets** topic to see which
>> > consumer saves what value of offsets, but it did not work...
>> >
>> > I tried the following:
>> >
>> > created a config file for console consumer as follows:
>> >
>> >
>> > => more kafka_offset_consumer.config
>> >
>> >  exclude.internal.topics=false
>> >
>> > and tried two versions of the console consumer scripts:
>> >
>> > #1:
>> > bin/kafka-console-consumer.sh --consumer.config
>> > kafka_offset_consumer.config --topic __consumer_offsets --zookeeper
>> > localhost:2181
>> >
>> > #2
>> > ./bin/kafka-simple-consumer-shell.sh --topic __consumer_offsets
>> > --partition 0 --broker-list localhost:9092 --formatter
>> > "kafka.server.OffsetManager\$OffsetsMessageFormatter" --consumer.config
>> > kafka_offset_consumer.config
>> >
>> >
>> > Neither worked - it just sits there but does not print anything, even
>> > though the consumers are actively consuming/saving offsets.
>> >
>> > Am I missing some other configuration/properties ?
>> >
>> > thanks!
>> >
>> > Marina
>> >
>> > I have also posted this question on StackOverflow:
>> >
>> >
>> >
>> http://stackoverflow.com/questions/33925866/kafka-0-8-2-1-how-to-read-from-consumer-offsets-topic
>> >
>>
>
>


Re: Topic creation using new Client Jar - 0.9.0

2015-12-03 Thread Guozhang Wang
The 0.9.0 client does not yet support admin requests like topic creation, so
you still need to do it through AdminUtils for now.
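
For example, a minimal sketch against the 0.9.0 AdminUtils API (note this
needs the kafka core jar on the classpath, not just kafka-clients; the
ZooKeeper address, topic name, and settings below are placeholders):

import java.util.Properties;

import kafka.admin.AdminUtils;
import kafka.utils.ZkUtils;

public class CreateTopicExample {
    public static void main(String[] args) {
        // Connect to ZooKeeper; the ints are session/connection timeouts in ms.
        ZkUtils zkUtils = ZkUtils.apply("localhost:2181", 30000, 30000, false);
        try {
            Properties topicConfig = new Properties();
            topicConfig.put("retention.ms", "86400000"); // 1 day retention
            // topic name, #partitions, replication factor, per-topic config
            AdminUtils.createTopic(zkUtils, "my-topic", 8, 2, topicConfig);
        } finally {
            zkUtils.close();
        }
    }
}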

We plan to add this support after KIP-4 is adopted:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-4+-+Command+line+and+centralized+administrative+operations

Guozhang

On Thu, Dec 3, 2015 at 7:20 AM, Helleren, Erik 
wrote:

> Hi All,
> Is it possible to create a topic programmatically with a specific topic
> configuration (number of partitions, replication factor, retention time,
> etc) using just the new 0.9.0 client jar?
> -Erik
>



-- 
-- Guozhang


Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Guozhang Wang
Marina,

To check if the topic does exist in Kafka (i.e. offsets are stored in Kafka
instead of in ZK) you can check this path in ZK:

/brokers/topics/__consumer_offsets

By default this topic should have 50 partitions.

Guozhang


On Thu, Dec 3, 2015 at 6:22 AM, Marina  wrote:

> Hi, Jason,
>
> I tried the same command both with specifying a formatter and without -
> same result:
>
> => /opt/kafka/bin/kafka-console-consumer.sh --formatter
> kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
> /tmp/consumer.properties --topic __consumer_offsets --zookeeper
> localhost:2181 --from-beginning
>
> ^CConsumed 0 messages
>
>
> When you are saying to make sure Consumers are using Kafka instead of ZK
> storage for offset... - what is the best way to do that? I thought checking
> the content via ZK shell - at this path:
> /consumers/<group>/offsets/<topic>/<partition>
>
> would be a definitive confirmation (there are no values stored there). Is
> it true?
>
> I feel it is some small stupid mistake I'm making, since it seems to work
> fine for others - drives me crazy :)
>
> thanks!!
> Marina
>
>
>
> - Original Message -
> From: Jason Gustafson 
> To: users@kafka.apache.org
> Sent: Wednesday, December 2, 2015 2:33 PM
> Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
>
> Hey Marina,
>
> My mistake, I see you're using 0.8.2.1. Are you also providing the
> formatter argument when using console-consumer.sh? Perhaps something like
> this:
>
> bin/kafka-console-consumer.sh --formatter
> kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper
> localhost:2181 --topic __consumer_offsets --from-beginning
>
> You may also want to confirm that your consumers are using Kafka instead of
> Zookeeper for offset storage. If you still don't see anything, we can
> always look into the partition data directly...
>
> -Jason
>
>
> On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson 
> wrote:
>
> > Looks like you need to use a different MessageFormatter class, since it
> > was renamed in 0.9. Instead use something like
> > "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
> >
> > -Jason
> >
> > On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
> > d.muralidha...@yottaa.com> wrote:
> >
> >> I have this same problem . Can someone help ?
> >>
> >> --Dhyan
> >>
> >> On Wed, Nov 25, 2015 at 3:31 PM, Marina 
> wrote:
> >>
> >> > Hello,
> >> >
> >> > I'm trying to find out which offsets my current High-Level consumers
> are
> >> > working off. I use Kafka 0.8.2.1, with **no** "offset.storage" set in
> >> the
> >> > server.properties of Kafka - which, I think, means that offsets are
> >> stored
> >> > in Kafka. (I also verified that no offsets are stored in Zookeeper by
> >> > checking this path in the Zk shell:
> >> >
> >>
> **/consumers/<group>/offsets/<topic>/<partition>**
> >> > )
> >> >
> >> > I tried to listen to the **__consumer_offsets** topic to see which
> >> > consumer saves what value of offsets, but it did not work...
> >> >
> >> > I tried the following:
> >> >
> >> > created a config file for console consumer as follows:
> >> >
> >> >
> >> > => more kafka_offset_consumer.config
> >> >
> >> >  exclude.internal.topics=false
> >> >
> >> > and tried two versions of the console consumer scripts:
> >> >
> >> > #1:
> >> > bin/kafka-console-consumer.sh --consumer.config
> >> > kafka_offset_consumer.config --topic __consumer_offsets --zookeeper
> >> > localhost:2181
> >> >
> >> > #2
> >> > ./bin/kafka-simple-consumer-shell.sh --topic __consumer_offsets
> >> > --partition 0 --broker-list localhost:9092 --formatter
> >> > "kafka.server.OffsetManager\$OffsetsMessageFormatter"
> --consumer.config
> >> > kafka_offset_consumer.config
> >> >
> >> >
> >> > Neither worked - it just sits there but does not print anything, even
> >> > though the consumers are actively consuming/saving offsets.
> >> >
> >> > Am I missing some other configuration/properties ?
> >> >
> >> > thanks!
> >> >
> >> > Marina
> >> >
> >> > I have also posted this question on StackOverflow:
> >> >
> >> >
> >> >
> >>
> http://stackoverflow.com/questions/33925866/kafka-0-8-2-1-how-to-read-from-consumer-offsets-topic
> >> >
> >>
> >
> >
>



-- 
-- Guozhang


Trying to understand the format of the LogSegment file.

2015-12-03 Thread Steve Graham
I am attempting to understand the details of the content of the log segment 
file in Kafka.

The documentation (http://kafka.apache.org/081/documentation.html#log)  
suggests:
The exact binary format for messages is versioned and maintained as a standard 
interface so message sets can be transferred between producer, broker, and 
client without recopying or conversion when desirable. This format is as 
follows:

On-disk format of a message

message length : 4 bytes (value: 1+4+n) 
"magic" value  : 1 byte
crc: 4 bytes
payload: n bytes


But I am struggling to map the documentation to what I see on the disk.

I created a topic, named simple-topic, and added one message to it (via the 
console producer).  The message payload was “message1”.

The DumpLogSegments tool shows:
Dumping /tmp/kafka-logs/sample-topic-0/.log
Starting offset: 0
offset: 0 position: 0 isvalid: true payloadsize: 8 magic: 0 compresscodec: 
NoCompressionCodec crc: 3916773564

Taking a hex dump of the (only) log file:
sample-topic-0 sgg$ hexdump -C .log | more
  00 00 00 00 00 00 00 00  00 00 00 16 e9 75 38 bc  |.u8.|
0010  00 00 ff ff ff ff 00 00  00 08 6d 65 73 73 61 67  |..messag|
0020  65 31 |e1|
0022

I tried to “reverse engineer” the contents, to see how it corresponds to the 
documentation:

Bytes 0-7 (00 00 00 00 00 00 00 00).  I am not sure what this is, some sort of 
filler?
Bytes 8-11 (00 00 00 16) seem to be some length field?  Decimal 22, which 
seems to correspond to the length of the entire message, but more than the 
1+4+n suggested by the documentation.
Bytes 12-15 (e9 75 38 bc) this corresponds to the CRC (decimal 3916773564).  No 
problem here.
Bytes 16-17 (00 00) not sure what this is.
Bytes 18-21 (ff ff ff ff) not sure what this is. A “magic number”?  But that 
should be just one byte.  Must be something else?
Bytes 22-25 (00 00  00 08) is the message payload size (8), this is the value 
of “n” in the formula for message length, exactly the length of the “message1” 
string. No problem here.
Bytes 26-33 (6d 65 73 73 61 67 65 31) is the payload (ascii: message1).  No 
problem here.

Can anyone on the list help me reconcile the documentation to what I see on the 
disk?  Specifically:
a) what are the first 8 bytes supposed to represent?  
b) the message length field described as 1+4+n doesn’t correspond with what 
I see on disk.  It looks like 4 (crc) + 2 (??) + 4 (?magic number?) + 4 
(payload length) + 8 (n).  What is the correct formula?
c) why does the CRC appear so early in the message (bytes 8-11), shouldn’t the 
magic value appear before the CRC?
d) what is the way to interpret bytes 16-21?  is the magic number in here 
somewhere?  What else is in this set of bytes?

Thanks
sgg



Error in handling geo redundant kafka process

2015-12-03 Thread Mazhar Shaikh
Hello All,

I'm using "librdkafka" for my C Project which has is needed to support geo
redundant kafka process (zookeeper + Broker).

Machine 1 : Producer1 : Broker IP "sysctrl1.vsepx.broker.com:9092,
sysctrl2.vsepx.broker.com:9092"

Machine 2 : Broker1 : advertised.host.name=sysctrl1.vsepx.broker.com
Machine 3 : Broker2 : advertised.host.name=sysctrl2.vsepx.broker.com


Scenario:
1. Broker1 is UP, Broker 2 is Down.
2. Producer1 is sending messages to  Broker1 successfully
3. Broker1 goes down.
4. Broker2 comes UP
5. Producer1's connection with Broker1 fails, but a connection with
Broker2 is established.
6. Producer1 exchanges metadata with Broker2, but messages are not
being sent to Broker2; instead Producer1 is still trying to send the
same messages to Broker1 (which is down).
7. All Messages Fail.

Now, if I kill and restart Producer1, all the new messages are
successfully sent to Broker2.

Broker logs:

[2015-12-03 09:28:24,573] INFO New leader is 0
(kafka.server.ZookeeperLeaderElector$LeaderChangeListener)
[2015-12-03 09:28:24,579] INFO Registered broker 0 at path /brokers/ids/0
with address sysctrl2.vsepx.broker.com:9092. (kafka.utils.ZkUtils$)
[2015-12-03 09:28:24,593] INFO [Kafka Server 0], started
(kafka.server.KafkaServer)
[2015-12-03 09:28:24,718] INFO [ReplicaFetcherManager on broker 0] Removed
fetcher for partitions
[topic1,1],[topic1,0],[topic1,10],[topic1,15],[topic1,9],[topic1,4],[topic1,8],[topic1,3],[topic1,13],[topic1,12],[topic1,2],[topic1,6],[topic1,7],[topic1,5],[topic1,11],
topic1,14

[2015-12-03 09:28:24,982] INFO [ReplicaFetcherManager on broker 0] Removed
fetcher for partitions
[topic1,1],[topic1,0],[topic1,10],[topic1,15],[topic1,9],[topic1,4],[topic1,8],[topic1,3],[topic1,13],[topic1,12],[topic1,2],[topic1,6],[topic1,7],[topic1,5],[topic1,11],
topic1,14

[2015-12-03 09:30:10,195] INFO Closing socket connection to /12.1.1.81.
(kafka.network.Processor)

Below are few logs from sample stub code


% Sent 3 bytes to topic topic1 partition -1, outQlen[127]
log_cb : rdkafka#producer-0: [7] [RECV] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Received MetadataResponse (473
bytes, CorrId 12, rtt 142.35ms)]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: = Received metadata from
sysctrl2.vsepx.broker.com:9092/bootstrap =]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: 1 brokers, 1 topics]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Broker #0/1:
sysctrl2.vsepx.broker.com:9092 NodeId 0]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic #0/1: topic1 with 16
partitions]
log_cb : rdkafka#producer-0: [7] [PARTCNT] [No change in partition count
for topic topic1]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 8 Leader 0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [8] with 8
messages (19 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 11 Leader
0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [11] with 8
messages (19 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 2 Leader 0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [2] with 8
messages (19 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 5 Leader 0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [5] with 8
messages (20 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 14 Leader
0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [14] with 8
messages (19 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 4 Leader 0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [4] with 8
messages (20 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [
sysctrl2.vsepx.broker.com:9092/bootstrap: Topic topic1 partition 13 Leader
0]
log_cb : rdkafka#producer-0: [7] [BRKDELGT] [Broker
sysctrl1.vsepx.broker.com:9092/0 is now leader for topic topic1 [13] with 8
messages (19 bytes) queued]
log_cb : rdkafka#producer-0: [7] [METADATA] [

Topic creation using new Client Jar - 0.9.0

2015-12-03 Thread Helleren, Erik
Hi All,
Is it possible to create a topic programmatically with a specific topic 
configuration (number of partitions, replication factor, retention time, etc) 
using just the new 0.9.0 client jar?
-Erik


Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Marina
Hi, Guozhang,
Yes, I can see this topic and partitions in ZK:

ls /brokers/topics/__consumer_offsets
[partitions]
ls /brokers/topics/__consumer_offsets/partitions
[44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2, 3, 4, 
5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 
36, 37, 38, 39, 40, 41, 42, 43]


example of one partition state:
ls /brokers/topics/__consumer_offsets/partitions/44
[state]
ls /brokers/topics/__consumer_offsets/partitions/44/state
[]


anything else you think I could check?

thanks!
Marina



- Original Message -
From: Guozhang Wang 
To: "users@kafka.apache.org" ; Marina 
Sent: Thursday, December 3, 2015 12:05 PM
Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

Marina,

To check if the topic does exist in Kafka (i.e. offsets are stored in Kafka
instead of in ZK) you can check this path in ZK:

/brokers/topics/__consumer_offsets

By default this topic should have 50 partitions.

Guozhang



On Thu, Dec 3, 2015 at 6:22 AM, Marina  wrote:

> Hi, Jason,
>
> I tried the same command both with specifying a formatter and without -
> same result:
>
> => /opt/kafka/bin/kafka-console-consumer.sh --formatter
> kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
> /tmp/consumer.properties --topic __consumer_offsets --zookeeper
> localhost:2181 --from-beginning
>
> ^CConsumed 0 messages
>
>
> When you are saying to make sure Consumers are using Kafka instead of ZK
> storage for offset... - what is the best way to do that? I thought checking
> the content via ZK shell - at this path:
> /consumers/<group>/offsets/<topic>/<partition>
>
> would be a definitive confirmation (there are no values stored there). Is
> it true?
>
> I feel it is some small stupid mistake I'm making, since it seems to work
> fine for others - drives me crazy :)
>
> thanks!!
> Marina
>
>
>
> - Original Message -
> From: Jason Gustafson 
> To: users@kafka.apache.org
> Sent: Wednesday, December 2, 2015 2:33 PM
> Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
>
> Hey Marina,
>
> My mistake, I see you're using 0.8.2.1. Are you also providing the
> formatter argument when using console-consumer.sh? Perhaps something like
> this:
>
> bin/kafka-console-consumer.sh --formatter
> kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper
> localhost:2181 --topic __consumer_offsets --from-beginning
>
> You may also want to confirm that your consumers are using Kafka instead of
> Zookeeper for offset storage. If you still don't see anything, we can
> always look into the partition data directly...
>
> -Jason
>
>
> On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson 
> wrote:
>
> > Looks like you need to use a different MessageFormatter class, since it
> > was renamed in 0.9. Instead use something like
> > "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
> >
> > -Jason
> >
> > On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
> > d.muralidha...@yottaa.com> wrote:
> >
> >> I have this same problem . Can someone help ?
> >>
> >> --Dhyan
> >>
> >> On Wed, Nov 25, 2015 at 3:31 PM, Marina 
> wrote:
> >>
> >> > Hello,
> >> >
> >> > I'm trying to find out which offsets my current High-Level consumers
> are
> >> > working off. I use Kafka 0.8.2.1, with **no** "offset.storage" set in
> >> the
> >> > server.properties of Kafka - which, I think, means that offsets are
> >> stored
> >> > in Kafka. (I also verified that no offsets are stored in Zookeeper by
> >> > checking this path in the Zk shell:
> >> >
> >>
> **/consumers/<group>/offsets/<topic>/<partition>**
> >> > )
> >> >
> >> > I tried to listen to the **__consumer_offsets** topic to see which
> >> > consumer saves what value of offsets, but it did not work...
> >> >
> >> > I tried the following:
> >> >
> >> > created a config file for console consumer as follows:
> >> >
> >> >
> >> > => more kafka_offset_consumer.config
> >> >
> >> >  exclude.internal.topics=false
> >> >
> >> > and tried two versions of the console consumer scripts:
> >> >
> >> > #1:
> >> > bin/kafka-console-consumer.sh --consumer.config
> >> > kafka_offset_consumer.config --topic __consumer_offsets --zookeeper
> >> > localhost:2181
> >> >
> >> > #2
> >> > ./bin/kafka-simple-consumer-shell.sh --topic __consumer_offsets
> >> > --partition 0 --broker-list localhost:9092 --formatter
> >> > "kafka.server.OffsetManager\$OffsetsMessageFormatter"
> --consumer.config
> >> > kafka_offset_consumer.config
> >> >
> >> >
> >> > Neither worked - it just sits there but does not print anything, even
> >> > though the consumers are actively consuming/saving offsets.
> >> >
> >> > Am I missing some other configuration/properties ?
> >> >
> >> > thanks!
> >> >
> >> > Marina
> >> >
> >> > I have also posted this question on StackOverflow:
> >> >
> >> >
> >> >
> 

Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Guozhang Wang
If you can validate these partitions have data (i.e. there are some offsets
committed to Kafka), then you may have to turn on debug-level logging in
config/tools-log4j.properties, which will allow the console consumer to print
debug-level logs, and see if there is anything suspicious.

Guozhang

On Thu, Dec 3, 2015 at 10:55 AM, Marina  wrote:

> Hi, Guozhang,
> Yes, I can see this topic and partitions in ZK:
>
> ls /brokers/topics/__consumer_offsets
> [partitions]
> ls /brokers/topics/__consumer_offsets/partitions
> [44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2,
> 3, 4, 5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
> 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
>
>
> example of one partition state:
> ls /brokers/topics/__consumer_offsets/partitions/44
> [state]
> ls /brokers/topics/__consumer_offsets/partitions/44/state
> []
>
>
> anything else you think I could check?
>
> thanks!
> Marina
>
>
>
> - Original Message -
> From: Guozhang Wang 
> To: "users@kafka.apache.org" ; Marina <
> ppi...@yahoo.com>
> Sent: Thursday, December 3, 2015 12:05 PM
> Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
>
> Marina,
>
> To check if the topic does exist in Kafka (i.e. offsets are stored in Kafka
> instead of in ZK) you can check this path in ZK:
>
> /brokers/topics/__consumer_offsets
>
> By default this topic should have 50 partitions.
>
> Guozhang
>
>
>
> On Thu, Dec 3, 2015 at 6:22 AM, Marina  wrote:
>
> > Hi, Jason,
> >
> > I tried the same command both with specifying a formatter and without -
> > same result:
> >
> > => /opt/kafka/bin/kafka-console-consumer.sh --formatter
> > kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
> > /tmp/consumer.properties --topic __consumer_offsets --zookeeper
> > localhost:2181 --from-beginning
> >
> > ^CConsumed 0 messages
> >
> >
> > When you are saying to make sure Consumers are using Kafka instead of ZK
> > storage for offset... - what is the best way to do that? I thought
> checking
> > the content via ZK shell - at this path:
> > /consumers/<group>/offsets/<topic>/<partition>
> >
> > would be a definitive confirmation (there are no values stored there). Is
> > it true?
> >
> > I feel it is some small stupid mistake I'm making, since it seems to work
> > fine for others - drives me crazy :)
> >
> > thanks!!
> > Marina
> >
> >
> >
> > - Original Message -
> > From: Jason Gustafson 
> > To: users@kafka.apache.org
> > Sent: Wednesday, December 2, 2015 2:33 PM
> > Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
> >
> > Hey Marina,
> >
> > My mistake, I see you're using 0.8.2.1. Are you also providing the
> > formatter argument when using console-consumer.sh? Perhaps something like
> > this:
> >
> > bin/kafka-console-consumer.sh --formatter
> > kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper
> > localhost:2181 --topic __consumer_offsets --from-beginning
> >
> > You may also want to confirm that your consumers are using Kafka instead
> of
> > Zookeeper for offset storage. If you still don't see anything, we can
> > always look into the partition data directly...
> >
> > -Jason
> >
> >
> > On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson 
> > wrote:
> >
> > > Looks like you need to use a different MessageFormatter class, since it
> > > was renamed in 0.9. Instead use something like
> > > "kafka.coordinator.GroupMetadataManager\$OffsetsMessageFormatter".
> > >
> > > -Jason
> > >
> > > On Wed, Dec 2, 2015 at 10:57 AM, Dhyan Muralidharan <
> > > d.muralidha...@yottaa.com> wrote:
> > >
> > >> I have this same problem . Can someone help ?
> > >>
> > >> --Dhyan
> > >>
> > >> On Wed, Nov 25, 2015 at 3:31 PM, Marina 
> > wrote:
> > >>
> > >> > Hello,
> > >> >
> > >> > I'm trying to find out which offsets my current High-Level consumers
> > are
> > >> > working off. I use Kafka 0.8.2.1, with **no** "offset.storage" set
> in
> > >> the
> > >> > server.properties of Kafka - which, I think, means that offsets are
> > >> stored
> > >> > in Kafka. (I also verified that no offsets are stored in Zookeeper
> by
> > >> > checking this path in the Zk shell:
> > >> >
> > >>
> >
> **/consumers/<group>/offsets/<topic>/<partition>**
> > >> > )
> > >> >
> > >> > I tried to listen to the **__consumer_offsets** topic to see which
> > >> > consumer saves what value of offsets, but it did not work...
> > >> >
> > >> > I tried the following:
> > >> >
> > >> > created a config file for console consumer as follows:
> > >> >
> > >> >
> > >> > => more kafka_offset_consumer.config
> > >> >
> > >> >  exclude.internal.topics=false
> > >> >
> > >> > and tried two versions of the console consumer scripts:
> > >> >
> > >> > #1:
> > >> > bin/kafka-console-consumer.sh --consumer.config
> > >> > kafka_offset_consumer.config --topic __consumer_offsets 

Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Lance Laursen
Hi Marina,

You can dump your __consumer_offsets logs manually in order to see if
anything is in them. Hop on each of your brokers and run the following
command:

  for f in $(find /path/to/kafka-logs/__consumer_offsets-* -name "*\.log");
do /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments
--print-data-log --files $f ; done

Remember to replace /path/to/kafka-logs with your configured log.dirs
location, and /usr/local/kafka/ with wherever you have installed kafka.

If that command doesn't generate any output (Starting offset: 0 followed by
nothing) on any of your brokers, then your consumers are for some reason
not committing offsets to Kafka.

Keep in mind that commits to __consumer_offsets by the consumer API use
groupID, Topic, and PartitionID to key off of in order to determine which
partition of __consumer_offsets to save the commit message to. This means
that if you're not running many consumers/topics, you could have partitions
with nothing in them and others with lots. Keep this in mind when using
-simple-consumer scripts as the simple consumer only pulls from one
partition at a time.
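
For what it's worth, the broker picks the __consumer_offsets partition from
the group ID alone (Utils.abs(groupId.hashCode) % offsets.topic.num.partitions
in the 0.8.2 offset manager), so a quick sketch to find which partition to
dump for a given group (the group name is a placeholder; 50 is the default
partition count):

public class OffsetsPartitionFor {
    public static void main(String[] args) {
        String groupId = "my-group";  // placeholder group ID
        int numPartitions = 50;       // default offsets.topic.num.partitions
        // The sign-bit mask stands in for Kafka's Utils.abs so that even an
        // Integer.MIN_VALUE hash code yields a non-negative result.
        int partition = (groupId.hashCode() & 0x7fffffff) % numPartitions;
        System.out.println("__consumer_offsets-" + partition);
    }
}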

On Thu, Dec 3, 2015 at 11:10 AM, Guozhang Wang  wrote:

> If you can validate these partitions have data (i.e. there are some offsets
> committed to Kafka), then you may have to turn on debug-level logging in
> config/tools-log4j.properties, which will allow the console consumer to print
> debug-level logs, and see if there is anything suspicious.
>
> Guozhang
>
> On Thu, Dec 3, 2015 at 10:55 AM, Marina  wrote:
>
> > Hi, Guozhang,
> > Yes, I can see this topic and partitions in ZK:
> >
> > ls /brokers/topics/__consumer_offsets
> > [partitions]
> > ls /brokers/topics/__consumer_offsets/partitions
> > [44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2,
> > 3, 4, 5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
> > 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
> >
> >
> > example of one partition state:
> > ls /brokers/topics/__consumer_offsets/partitions/44
> > [state]
> > ls /brokers/topics/__consumer_offsets/partitions/44/state
> > []
> >
> >
> > anything else you think I could check?
> >
> > thanks!
> > Marina
> >
> >
> >
> > - Original Message -
> > From: Guozhang Wang 
> > To: "users@kafka.apache.org" ; Marina <
> > ppi...@yahoo.com>
> > Sent: Thursday, December 3, 2015 12:05 PM
> > Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
> >
> > Marina,
> >
> > To check if the topic does exist in Kafka (i.e. offsets are stored in
> Kafka
> > instead of in ZK) you can check this path in ZK:
> >
> > /brokers/topics/__consumer_offsets
> >
> > By default this topic should have 50 partitions.
> >
> > Guozhang
> >
> >
> >
> > On Thu, Dec 3, 2015 at 6:22 AM, Marina  wrote:
> >
> > > Hi, Jason,
> > >
> > > I tried the same command both with specifying a formatter and without -
> > > same result:
> > >
> > > => /opt/kafka/bin/kafka-console-consumer.sh --formatter
> > > kafka.server.OffsetManager\$OffsetsMessageFormatter --consumer.config
> > > /tmp/consumer.properties --topic __consumer_offsets --zookeeper
> > > localhost:2181 --from-beginning
> > >
> > > ^CConsumed 0 messages
> > >
> > >
> > > When you are saying to make sure Consumers are using Kafka instead of
> ZK
> > > storage for offset... - what is the best way to do that? I thought
> > checking
> > > the content via ZK shell - at this path:
> > >
> /consumers/<group>/offsets/<topic>/<partition>
> > >
> > > would be a definitive confirmation (there are no values stored there).
> Is
> > > it true?
> > >
> > > I feel it is some small stupid mistake I'm making, since it seems to
> work
> > > fine for others - drives me crazy :)
> > >
> > > thanks!!
> > > Marina
> > >
> > >
> > >
> > > - Original Message -
> > > From: Jason Gustafson 
> > > To: users@kafka.apache.org
> > > Sent: Wednesday, December 2, 2015 2:33 PM
> > > Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
> > >
> > > Hey Marina,
> > >
> > > My mistake, I see you're using 0.8.2.1. Are you also providing the
> > > formatter argument when using console-consumer.sh? Perhaps something
> like
> > > this:
> > >
> > > bin/kafka-console-consumer.sh --formatter
> > > kafka.server.OffsetManager\$OffsetsMessageFormatter --zookeeper
> > > localhost:2181 --topic __consumer_offsets --from-beginning
> > >
> > > You may also want to confirm that your consumers are using Kafka
> instead
> > of
> > > Zookeeper for offset storage. If you still don't see anything, we can
> > > always look into the partition data directly...
> > >
> > > -Jason
> > >
> > >
> > > On Wed, Dec 2, 2015 at 11:13 AM, Jason Gustafson 
> > > wrote:
> > >
> > > > Looks like you need to use a different MessageFormatter class, since
> it
> > > > was renamed in 0.9. Instead use something like
> > > 

Failed attempt to delete topic

2015-12-03 Thread Rakesh Vidyadharan
Hello,

We are on an older kafka (0.8.1) version.  While a number of consumers were 
running, we attempted to delete a few topics using the kafka-topics.sh file 
(basically want to remove all messages in that topic and restart, since our 
entities went through some incompatible changes).  We noticed logs saying the 
topic has been queued for deletion.  After stopping all processes accessing 
kafka, we restarted kafka and then our processes.  The old topics do not seem 
to have been deleted (I can still see the log directories corresponding to the 
topics), and none of the clients are able to either publish or read to the 
topics that we attempted to delete.  Attempting to read gives us the following 
type of error:

Attempting to access an invalid KafkaTopic ( are you operating on a closed 
KafkaTopic ?)

Attempting to publish gives us a more general type of error:

kafka.common.FailedToSendMessageException: Failed to send messages after 3 
tries.
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
at kafka.producer.Producer.send(Producer.scala:76)

How can we get around this issue and start using the topics that we tried to 
clean up?  There may have been better ways to achieve what we wanted; if so, 
please suggest recommendations as well.

Thanks
Rakesh



Managing replication on specific interfaces

2015-12-03 Thread scott macfawn
A little background:

I have a decent-sized kafka cluster. Each of the data nodes has two NICs
with separate IPs. We are finding that the distribution of network traffic
is not balancing between the two. Is there a way to make it so that
producers write to kafka on one interface and replication takes place on
the other?

/Scott


Re: Trying to understand the format of the LogSegment file.

2015-12-03 Thread Magnus Edenhill
Hi,

messages are stored on disk in the Kafka (network) protocol format, so if
you have a look at the protocol guide you'll see the pieces start coming
together:
https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-Messagesets
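
Mapping that guide's v0 message-set layout onto the hex dump from the
original mail gives (assuming the default, uncompressed v0 format; the
22-byte message size = 4 crc + 1 magic + 1 attributes + 4 key length +
4 value length + 8 value bytes):

00 00 00 00 00 00 00 00   offset        (8 bytes; first message, offset 0)
00 00 00 16               message size  (4 bytes; 22 bytes follow)
e9 75 38 bc               crc           (4 bytes; computed over magic..value)
00                        magic         (1 byte; message format version 0)
00                        attributes    (1 byte; no compression)
ff ff ff ff               key length    (4 bytes; -1 = null key)
00 00 00 08               value length  (4 bytes)
6d 65 73 73 61 67 65 31   value         ("message1")

So the first 8 bytes are the message's 64-bit offset, the magic byte sits
right after the crc (byte 16), and ff ff ff ff is the length of a null key;
the snippet quoted from the 0.8.1 documentation appears to describe an older,
simplified layout.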

Regards,
Magnus



2015-12-03 18:18 GMT+01:00 Steve Graham :

> I am attempting to understand the details of the content of the log
> segment file in Kafka.
>
> The documentation (http://kafka.apache.org/081/documentation.html#log)
> suggests:
> The exact binary format for messages is versioned and maintained as a
> standard interface so message sets can be transferred between producer,
> broker, and client without recopying or conversion when desirable. This
> format is as follows:
>
> On-disk format of a message
>
> message length : 4 bytes (value: 1+4+n)
> "magic" value  : 1 byte
> crc: 4 bytes
> payload: n bytes
>
>
> But I am struggling to map the documentation to what I see on the disk.
>
> I created a topic, named simple-topic, and added one message to it (via
> the console producer).  The message payload was “message1”.
>
> The DumpLogSegments tool shows:
> Dumping /tmp/kafka-logs/sample-topic-0/.log
> Starting offset: 0
> offset: 0 position: 0 isvalid: true payloadsize: 8 magic: 0 compresscodec:
> NoCompressionCodec crc: 3916773564
>
> Taking a hex dump of the (only) log file:
> sample-topic-0 sgg$ hexdump -C .log | more
>   00 00 00 00 00 00 00 00  00 00 00 16 e9 75 38 bc
> |.u8.|
> 0010  00 00 ff ff ff ff 00 00  00 08 6d 65 73 73 61 67
> |..messag|
> 0020  65 31 |e1|
> 0022
>
> I tried to “reverse engineer” the contents, to see how it corresponds to
> the documentation:
>
> Bytes 0-7 (00 00 00 00 00 00 00 00).  I am not sure what this is, some
> sort of filler?
> Bytes 8-11 (00 00 00 16) seem to be some length field?  Decimal 22, which
> seems to correspond to the length of the entire message, but more than the
> 1+4+n suggested by the documentation.
> Bytes 12-15 (e9 75 38 bc) this corresponds to the CRC (decimal
> 3916773564).  No problem here.
> Bytes 16-17 (00 00) not sure what this is.
> Bytes 18-21 (ff ff ff ff) not sure what this is. A “magic number”?  But
> that should be just one byte.  Must be something else?
> Bytes 22-25 (00 00  00 08) is the message payload size (8), this is the
> value of “n” in the formula for message length, exactly the length of the
> “message1” string. No problem here.
> Bytes 26-33 (6d 65 73 73 61 67 65 31) is the payload (ascii: message1).
> No problem here.
>
> Can anyone on the list help me reconcile the documentation to what I see
> on the disk?  Specifically:
> a) what are the first 8 bytes supposed to represent?
> b) the message length field described as 1+4+n doesn’t correspond with
> what I see on disk.  It looks like 4 (crc) + 2 (??) + 4 (?magic number?) +
> 4 (payload length) + 8 (n).  What is the correct formula?
> c) why does the CRC appear so early in the message (bytes 8-11), shouldn’t
> the magic value appear before the CRC?
> d) what is the way to interpret bytes 16-21?  is the magic number in here
> somewhere?  What else is in this set of bytes?
>
> Thanks
> sgg
>
>


Re: New consumer not fetching as quickly as possible

2015-12-03 Thread Guozhang Wang
Good to know. Thanks Tao.

On Wed, Dec 2, 2015 at 5:42 PM, tao xiao  wrote:

> It does help with increasing the poll timeout to Long.MAX_VALUE. I got
> messages in every poll but just the time between each poll is long. That is
> how I discovered it was a network issue between consumer and broker.  I
> believe it will have the same effect as long as I set the poll timeout high
> enough; it does not necessarily have to be Long.MAX_VALUE.
>
> On Thu, 3 Dec 2015 at 02:04 Guozhang Wang  wrote:
>
> > Thanks for the updates Tao.
> >
> > Just wanted to make sure that there are no other potential issues when
> > consumer and broker are remote, which is also quite common in practice:
> if
> > you increase the timeout value in poll(timeout) to even larger values
> (say
> > two times the average latency in your network) and also set the
> > request.timeout.ms config to be large enough as well, does that resolve
> > the
> > issue even if your consumer is not co-located?
> >
> > Guozhang
> >
> > On Wed, Dec 2, 2015 at 12:46 AM, tao xiao  wrote:
> >
> > > It turned out it was due to network latency between consumer and broker.
> > > Once I moved the consumer to the same box as the broker, messages were
> > > returned in every poll.
> > >
> > > Thanks for all the helps.
> > >
> > > On Wed, 2 Dec 2015 at 15:38 Gerard Klijs 
> > wrote:
> > >
> > > > Another possible reason which comes to my mind is that you have
> > > > multiple consumer threads, but not the partitions/brokers to support
> > > > them. When I'm running my tool on multiple threads I get a lot of
> > > > time-outs. When I only use one consumer thread I get them only at the
> > > > start and the end.
> > > >
> > > > On Wed, Dec 2, 2015 at 5:43 AM Jason Gustafson 
> > > wrote:
> > > >
> > > > > There is some initial overhead before data can be fetched. For
> > example,
> > > > the
> > > > > group has to be joined and topic metadata has to be fetched. Do you
> > see
> > > > > unexpected empty fetches beyond the first 10 polls?
> > > > >
> > > > > Thanks,
> > > > > Jason
> > > > >
> > > > > On Tue, Dec 1, 2015 at 7:43 PM, tao xiao 
> > wrote:
> > > > >
> > > > > > Hi Jason,
> > > > > >
> > > > > > You are correct. I initially produced 1 messages in Kafka
> > before
> > > I
> > > > > > started up my consumer with auto.offset.reset=earliest. But like
> I
> > > said
> > > > > the
> > > > > > majority number of first 10 polls returned 0 message and the lag
> > > > remained
> > > > > > above 0 which means I still have enough messages to consume.
> BTW I
> > > > > commit
> > > > > > offset manually so the lag should accurately reflect how many
> > > messages
> > > > > > remaining.
> > > > > >
> > > > > > I will turn on debug logging and test again.
> > > > > >
> > > > > > On Wed, 2 Dec 2015 at 07:17 Jason Gustafson 
> > > > wrote:
> > > > > >
> > > > > > > Hey Tao, other than high latency between the brokers and the
> > > > consumer,
> > > > > > I'm
> > > > > > > not sure what would cause this. Can you turn on debug logging
> and
> > > run
> > > > > > > again? I'm looking for any connection problems or
> metadata/fetch
> > > > > request
> > > > > > > errors. And I have to ask a dumb question, how do you know that
> > > more
> > > > > > > messages are available? Are you monitoring the consumer's lag?
> > > > > > >
> > > > > > > -Jason
> > > > > > >
> > > > > > > On Tue, Dec 1, 2015 at 10:07 AM, Gerard Klijs <
> > > > gerard.kl...@dizzit.com
> > > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Thanks Tao, it worked.
> > > > > > > > I also played around with my test setting, trying to replicate
> > > > > > > > your results, using default settings. But as long as the poll
> > > > > > > > timeout is set to 100ms or larger, the only time-outs I get are
> > > > > > > > near the start and near the end (when indeed there is nothing
> > > > > > > > to consume). This is with a producer putting out 1000 messages
> > > > > > > > a second. Maybe the load of the producer you're using is not
> > > > > > > > constant? Maybe you could run a test with the
> > > > > > > > org.apache.kafka.tools.ProducerPerformance class to see if it
> > > > > > > > makes a difference?
> > > > > > > >
> > > > > > > > On Tue, Dec 1, 2015 at 11:35 AM tao xiao <
> xiaotao...@gmail.com
> > >
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Gerard,
> > > > > > > > >
> > > > > > > > > In your case I think you can set fetch.min.bytes=1 so that
> > the
> > > > > server
> > > > > > > > will
> > > > > > > > > answer the fetch request as soon as a single byte of data
> is
> > > > > > available
> > > > > > > > > instead of accumulating enough messages.
> > > > > > > > >
> > > > > > > > > But in my case is I have plenty of messages in broker and I
> > am
> > > > 

Re: Failed attempt to delete topic

2015-12-03 Thread Stevo Slavić
Delete was only considered to be working since Kafka 0.8.2 (although
there are still not easily reproducible edge cases where it doesn't work
well, even in 0.8.2 or newer).
In 0.8.1 one could request a topic to be deleted (the request gets stored as
an entry in ZooKeeper); because of the presence of that request the topic
would become unusable (cannot publish or read), but the broker would
actually never act on the request and delete the topic.

Maybe it will be enough to delete the ZooKeeper entry for the topic
deletion request under /admin/delete_topics to have the topic usable again.

Otherwise, just upgrade broker side to 0.8.2.x or latest 0.9.0.0 - new
broker should work with old clients so maybe you don't have to upgrade
client side immediately.

Kind regards,
Stevo Slavic.


On Fri, Dec 4, 2015 at 12:33 AM, Mayuresh Gharat  wrote:

> Can you paste some logs from the controller, when you deleted the topic?
>
> Thanks,
>
> Mayuresh
>
> On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
> rvidyadha...@gracenote.com> wrote:
>
> > Hello,
> >
> > We are on an older kafka (0.8.1) version.  While a number of consumers
> > were running, we attempted to delete a few topics using the
> kafka-topics.sh
> > file (basically want to remove all messages in that topic and restart,
> > since our entities went through some incompatible changes).  We noticed
> > logs saying the topic has been queued for deletion.  After stopping all
> > processes accessing kafka, we restarted kafka and then our processes.
> The
> > old topics do not seem to have been deleted (I can still see the log
> > directories corresponding to the topics), and none of the clients are
> able
> > to either publish or read to the topics that we attempted to delete.
> > Attempting to read gives us the following type of error:
> >
> > Attempting to access an invalid KafkaTopic ( are you operating on a
> closed
> > KafkaTopic ?)
> >
> > Attempting to publish gives us a more general type of error:
> >
> > kafka.common.FailedToSendMessageException: Failed to send messages after
> 3
> > tries.
> > at
> >
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> > at kafka.producer.Producer.send(Producer.scala:76)
> >
> > How can we get around this issue and start using the topics that we tried
> > to clean up?  There may have been better ways to achieve what we wanted;
> > if so, please suggest recommendations as well.
> >
> > Thanks
> > Rakesh
> >
> >
>
>
> --
> -Regards,
> Mayuresh R. Gharat
> (862) 250-7125
>


Re: Failed attempt to delete topic

2015-12-03 Thread Rakesh Vidyadharan
Hi Mayuresh

These are some of the relevant logs that I could find

[2015-12-03 16:04:23,594] INFO Loading log 'merckx.raw.event.type-0' 
(kafka.log.LogManager)
[2015-12-03 16:04:23,595] INFO Completed load of log merckx.raw.event.type-0 
with log end offset 2 (kafka.log.Log)
[2015-12-03 16:04:23,641] INFO Loading log 'merckx.raw.event-0' 
(kafka.log.LogManager)
[2015-12-03 16:04:23,643] INFO Completed load of log merckx.raw.event-0 with 
log end offset 25149 (kafka.log.Log)
[2015-12-03 16:04:25,208] INFO [ReplicaFetcherManager on broker 0] Removed 
fetcher for partitions 
[merckx.raw.venue.capacity,0],[merckx.raw.sport.franchise,0],[merckx.raw.college,0],[merckx.raw.venue.formerName,0],[merckx.raw.venue,0],[merckx.raw.event,0],[merckx.raw.organization,0],[merckx.raw.sport.franchise.season,0],[merckx.raw.organization.season,0],[merckx.raw.sport,0],[merckx.raw.event.type,0]
 (kafka.server.ReplicaFetcherManager)
[2015-12-03 16:04:25,279] INFO Truncating log merckx.raw.event-0 to offset 
25149. (kafka.log.Log)
[2015-12-03 16:04:25,280] INFO Truncating log merckx.raw.event.type-0 to offset 
2. (kafka.log.Log)
[2015-12-03 16:09:04,709] ERROR [KafkaApi-0] Error while fetching metadata for 
partition [merckx.raw.event,0] (kafka.server.KafkaApis)
kafka.common.LeaderNotAvailableException: Leader not available for partition 
[merckx.raw.event,0]
[2015-12-03 16:09:05,926] ERROR [KafkaApi-0] Error while fetching metadata for 
partition [merckx.raw.event,0] (kafka.server.KafkaApis)


Thanks

Rakesh




On 03/12/2015 17:33, "Mayuresh Gharat"  wrote:

>Can you paste some logs from the controller, when you deleted the topic?
>
>Thanks,
>
>Mayuresh
>
>On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
>rvidyadha...@gracenote.com> wrote:
>
>> Hello,
>>
>> We are on an older kafka (0.8.1) version.  While a number of consumers
>> were running, we attempted to delete a few topics using the kafka-topics.sh
>> file (basically want to remove all messages in that topic and restart,
>> since our entities went through some incompatible changes).  We noticed
>> logs saying the topic has been queued for deletion.  After stopping all
>> processes accessing kafka, we restarted kafka and then our processes.  The
>> old topics do not seem to have been deleted (I can still see the log
>> directories corresponding to the topics), and none of the clients are able
>> to either publish or read to the topics that we attempted to delete.
>> Attempting to read gives us the following type of error:
>>
>> Attempting to access an invalid KafkaTopic ( are you operating on a closed
>> KafkaTopic ?)
>>
>> Attempting to publish gives us a more general type of error:
>>
>> kafka.common.FailedToSendMessageException: Failed to send messages after 3
>> tries.
>> at
>> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>> at kafka.producer.Producer.send(Producer.scala:76)
>>
>> How can we get around this issue and start using the topics that we tried
>> to clean up?  There may have been better ways to achieve what we wanted;
>> if so, please suggest recommendations as well.
>>
>> Thanks
>> Rakesh
>>
>>
>
>
>-- 
>-Regards,
>Mayuresh R. Gharat
>(862) 250-7125


Re: Failed attempt to delete topic

2015-12-03 Thread Mayuresh Gharat
you can use the zookeeper shell inside the bin directory for that.
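
For example (a sketch assuming the default ZooKeeper port and one of the
stuck topic names from your logs; "rmr" removes a znode recursively):

bin/zookeeper-shell.sh localhost:2181
ls /admin/delete_topics
rmr /admin/delete_topics/merckx.raw.event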

Thanks,

Mayuresh

On Thu, Dec 3, 2015 at 4:04 PM, Rakesh Vidyadharan <
rvidyadha...@gracenote.com> wrote:

> Thanks Stevo.  I did see some messages related to /admin/delete_topics.
> Will do some research on how I can clean up zookeeper.
>
> Thanks
> Rakesh
>
>
>
>
> On 03/12/2015 17:55, "Stevo Slavić"  wrote:
>
> >Delete was only considered to be working since Kafka 0.8.2 (although
> >there are still not easily reproducible edge cases where it doesn't work
> >well, even in 0.8.2 or newer).
> >In 0.8.1 one could request a topic to be deleted (the request gets stored as
> >an entry in ZooKeeper); because of the presence of that request the topic
> >would become unusable (cannot publish or read), but the broker would
> >actually never act on the request and delete the topic.
> >
> >Maybe it will be enough to delete the ZooKeeper entry for the topic
> >deletion request under /admin/delete_topics to have the topic usable again.
> >
> >Otherwise, just upgrade broker side to 0.8.2.x or latest 0.9.0.0 - new
> >broker should work with old clients so maybe you don't have to upgrade
> >client side immediately.
> >
> >Kind regards,
> >Stevo Slavic.
> >
> >
> >On Fri, Dec 4, 2015 at 12:33 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com
> >> wrote:
> >
> >> Can you paste some logs from the controller, when you deleted the topic?
> >>
> >> Thanks,
> >>
> >> Mayuresh
> >>
> >> On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
> >> rvidyadha...@gracenote.com> wrote:
> >>
> >> > Hello,
> >> >
> >> > We are on an older kafka (0.8.1) version.  While a number of consumers
> >> > were running, we attempted to delete a few topics using the
> >> kafka-topics.sh
> >> > file (basically want to remove all messages in that topic and restart,
> >> > since our entities went through some incompatible changes).  We
> noticed
> >> > logs saying the topic has been queued for deletion.  After stopping
> all
> >> > processes accessing kafka, we restarted kafka and then our processes.
> >> The
> >> > old topics do not seem to have been deleted (I can still see the log
> >> > directories corresponding to the topics), and none of the clients are
> >> able
> >> > to either publish or read to the topics that we attempted to delete.
> >> > Attempting to read gives us the following type of error:
> >> >
> >> > Attempting to access an invalid KafkaTopic ( are you operating on a
> >> closed
> >> > KafkaTopic ?)
> >> >
> >> > Attempting to publish gives us a more general type of error:
> >> >
> >> > kafka.common.FailedToSendMessageException: Failed to send messages
> after
> >> 3
> >> > tries.
> >> > at
> >> >
> >>
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> >> > at kafka.producer.Producer.send(Producer.scala:76)
> >> >
> >> > How can we get around this issue and start using the topics that we
> >> > tried to clean up?  There may have been better ways to achieve what we
> >> > wanted; if so, please suggest recommendations as well.
> >> >
> >> > Thanks
> >> > Rakesh
> >> >
> >> >
> >>
> >>
> >> --
> >> -Regards,
> >> Mayuresh R. Gharat
> >> (862) 250-7125
> >>
>



-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Failed attempt to delete topic

2015-12-03 Thread Rakesh Vidyadharan
Thanks Mayuresh.  I was able to use the shell to delete the entries and things 
are working fine now.




On 03/12/2015 18:22, "Mayuresh Gharat"  wrote:

>you can use the zookeeper shell inside the bin directory for that.
>
>Thanks,
>
>Mayuresh
>
>On Thu, Dec 3, 2015 at 4:04 PM, Rakesh Vidyadharan <
>rvidyadha...@gracenote.com> wrote:
>
>> Thanks Stevo.  I did see some messages related to /admin/delete_topics.
>> Will do some research on how I can clean up zookeeper.
>>
>> Thanks
>> Rakesh
>>
>>
>>
>>
>> On 03/12/2015 17:55, "Stevo Slavić"  wrote:
>>
>> >Delete was only considered to be working since Kafka 0.8.2 (although
>> >there are still not easily reproducible edge cases where it doesn't work
>> >well, even in 0.8.2 or newer).
>> >In 0.8.1 one could request a topic to be deleted (the request gets stored as
>> >an entry in ZooKeeper); because of the presence of that request the topic
>> >would become unusable (cannot publish or read), but the broker would
>> >actually never act on the request and delete the topic.
>> >
>> >Maybe it will be enough to delete the ZooKeeper entry for the topic
>> >deletion request under /admin/delete_topics to have the topic usable again.
>> >
>> >Otherwise, just upgrade broker side to 0.8.2.x or latest 0.9.0.0 - new
>> >broker should work with old clients so maybe you don't have to upgrade
>> >client side immediately.
>> >
>> >Kind regards,
>> >Stevo Slavic.
>> >
>> >
>> >On Fri, Dec 4, 2015 at 12:33 AM, Mayuresh Gharat <
>> gharatmayures...@gmail.com
>> >> wrote:
>> >
>> >> Can you paste some logs from the controller, when you deleted the topic?
>> >>
>> >> Thanks,
>> >>
>> >> Mayuresh
>> >>
>> >> On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
>> >> rvidyadha...@gracenote.com> wrote:
>> >>
>> >> > Hello,
>> >> >
>> >> > We are on an older kafka (0.8.1) version.  While a number of consumers
>> >> > were running, we attempted to delete a few topics using the
>> >> kafka-topics.sh
>> >> > file (basically want to remove all messages in that topic and restart,
>> >> > since our entities went through some incompatible changes).  We
>> noticed
>> >> > logs saying the topic has been queued for deletion.  After stopping
>> all
>> >> > processes accessing kafka, we restarted kafka and then our processes.
>> >> The
>> >> > old topics do not seem to have been deleted (I can still see the log
>> >> > directories corresponding to the topics), and none of the clients are
>> >> able
>> >> > to either publish or read to the topics that we attempted to delete.
>> >> > Attempting to read gives us the following type of error:
>> >> >
>> >> > Attempting to access an invalid KafkaTopic ( are you operating on a
>> >> closed
>> >> > KafkaTopic ?)
>> >> >
>> >> > Attempting to publish gives us a more general type of error:
>> >> >
>> >> > kafka.common.FailedToSendMessageException: Failed to send messages
>> after
>> >> 3
>> >> > tries.
>> >> > at
>> >> >
>> >>
>> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
>> >> > at kafka.producer.Producer.send(Producer.scala:76)
>> >> >
>> >> > How can we get around this issue and start using the topics that we
>> >> > tried to clean up?  There may have been better ways to achieve what we
>> >> > wanted; if so, please suggest recommendations as well.
>> >> >
>> >> > Thanks
>> >> > Rakesh
>> >> >
>> >> >
>> >>
>> >>
>> >> --
>> >> -Regards,
>> >> Mayuresh R. Gharat
>> >> (862) 250-7125
>> >>
>>
>
>
>
>-- 
>-Regards,
>Mayuresh R. Gharat
>(862) 250-7125


Re: Failed attempt to delete topic

2015-12-03 Thread Mayuresh Gharat
Can you paste some logs from the controller, when you deleted the topic?

Thanks,

Mayuresh

On Thu, Dec 3, 2015 at 2:30 PM, Rakesh Vidyadharan <
rvidyadha...@gracenote.com> wrote:

> Hello,
>
> We are on an older kafka (0.8.1) version.  While a number of consumers
> were running, we attempted to delete a few topics using the kafka-topics.sh
> file (basically want to remove all messages in that topic and restart,
> since our entities went through some incompatible changes).  We noticed
> logs saying the topic has been queued for deletion.  After stopping all
> processes accessing kafka, we restarted kafka and then our processes.  The
> old topics do not seem to have been deleted (I can still see the log
> directories corresponding to the topics), and none of the clients are able
> to either publish or read to the topics that we attempted to delete.
> Attempting to read gives us the following type of error:
>
> Attempting to access an invalid KafkaTopic ( are you operating on a closed
> KafkaTopic ?)
>
> Attempting to publish gives us a more general type of error:
>
> kafka.common.FailedToSendMessageException: Failed to send messages after 3
> tries.
> at
> kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:90)
> at kafka.producer.Producer.send(Producer.scala:76)
>
> How can we get around this issue and start using the topics that we tried
> to clean up?  There may have been better ways to achieve what we wanted; if
> so, please suggest them.
>
> Thanks
> Rakesh
>
>


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Re: Failed attempt to delete topic

2015-12-03 Thread Steve Robenalt
Hi Rakesh,

Topic deletion didn't really work properly until 0.8.2. Here's a
stackoverflow link that summarizes how to work around this limitation:

http://stackoverflow.com/questions/24287900/delete-topic-in-kafka-0-8-1-1
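Roughly, the workaround described there boils down to the following sketch
(topic name and paths are placeholders, and step 2 destroys data, so stop
all brokers first):

  # 1. stop every broker in the cluster
  # 2. on each broker, remove the topic's partition directories from log.dirs
  rm -rf /tmp/kafka-logs/mytopic-*
  # 3. in the ZooKeeper CLI (bin/zkCli.sh), remove the topic metadata;
  #    rmr needs ZK 3.4+, otherwise delete the child znodes one by one
  rmr /brokers/topics/mytopic
  rmr /admin/delete_topics/mytopic
  # 4. restart the brokers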

HTH,
Steve

On Thu, Dec 3, 2015 at 3:33 PM, Mayuresh Gharat 
wrote:

> Can you paste some logs from the controller, when you deleted the topic?
>
> Thanks,
>
> Mayuresh



-- 
Steve Robenalt
Software Architect
sroben...@highwire.org 
(office/cell): 916-505-1785

HighWire Press, Inc.
425 Broadway St, Redwood City, CA 94063
www.highwire.org

Technology for Scholarly Communication


Re: Failed attempt to delete topic

2015-12-03 Thread Rakesh Vidyadharan
Thanks Stevo.  I did see some messages related to /admin/delete_topics.  Will
do some research on how I can clean up ZooKeeper.
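For what it's worth, a minimal sketch of that cleanup in the ZooKeeper CLI
(the topic name is a placeholder; point the CLI at the ensemble your brokers
use):

  bin/zkCli.sh -server localhost:2181
  # the delete request is a leaf znode; removing it unsticks the topic
  delete /admin/delete_topics/mytopic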

Thanks
Rakesh




On 03/12/2015 17:55, "Stevo Slavić"  wrote:

>Topic deletion has only been considered working since Kafka 0.8.2 (and even
>in 0.8.2 or newer there are still hard-to-reproduce edge cases where it
>doesn't work well).
>In 0.8.1 one could request that a topic be deleted (the request gets stored
>as an entry in ZooKeeper); because of the presence of that request the topic
>would become unusable (cannot publish or read), but the broker would never
>actually act on the request and delete the topic.
>
>It may be enough to delete the topic-deletion request entry from ZooKeeper
>under /admin/delete_topics to make the topic usable again.
>
>Otherwise, just upgrade the broker side to 0.8.2.x or the latest 0.9.0.0 - a
>new broker should work with old clients, so you may not have to upgrade the
>client side immediately.
>
>Kind regards,
>Stevo Slavic.


Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?

2015-12-03 Thread Marina
Thank you, Lance - this is very useful info! I did figure out what was wrong
in my case - the offsets were not stored in Kafka at all, they were stored in
ZooKeeper, and I was using the wrong command to inspect the ZK content -
doing 'ls <path>' instead of 'get <path>'.
Once I used 'get' I could see the correct offsets for all my consumers in ZK.
We will switch to using Kafka storage very soon!

Thanks!
Marina
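For anyone hitting the same confusion, the distinction in the ZooKeeper
shell is that 'ls' lists child znodes while 'get' prints a znode's data, and
the committed offset is the data - a quick sketch with placeholder
group/topic/partition names:

  # lists children only - an offset znode has none, so this prints []
  ls /consumers/my-group/offsets/my-topic/0
  # prints the znode's data - the committed offset itself
  get /consumers/my-group/offsets/my-topic/0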
  From: Lance Laursen 
 To: users@kafka.apache.org 
Cc: Marina 
 Sent: Thursday, December 3, 2015 4:35 PM
 Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
Hi Marina,

You can dump your __consumer_offsets logs manually to see whether anything is
in them. Hop onto each of your brokers and run the following command:

  # dump every __consumer_offsets segment on this broker
  for f in $(find /path/to/kafka-logs/__consumer_offsets-* -name "*.log"); do
    /usr/local/kafka/bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
      --print-data-log --files "$f"
  done

Remember to replace /path/to/kafka-logs with your configured log.dirs
location, and /usr/local/kafka/ with wherever you have installed kafka.

If that command doesn't generate any output on any of your brokers
("Starting offset: 0" followed by nothing), then your consumers are for some
reason not committing offsets to Kafka.
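If offsets should be going to Kafka but are only showing up in ZooKeeper,
also check the 0.8.2 high-level consumer settings that control offset
storage - a sketch of a consumer.properties fragment (values shown are the
ones you'd set, not the defaults):

  # store offsets in the __consumer_offsets topic instead of ZooKeeper
  offsets.storage=kafka
  # commit to both stores while migrating existing groups, then disable
  dual.commit.enabled=true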

Keep in mind that commits to __consumer_offsets are keyed by groupId, topic,
and partition, and that the __consumer_offsets partition a group's commits
land in is derived from the groupId. This means that if you're not running
many consumer groups, you could have partitions with nothing in them and
others with lots. Keep this in mind when using the simple-consumer scripts,
as the simple consumer only pulls from one partition at a time.
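To make the mapping concrete, here is a minimal Scala sketch mirroring what
the 0.8.2 broker does in kafka.server.OffsetManager.partitionFor (the group
name and partition count below are example values, not taken from this
thread):

  // mask the sign bit like Kafka's Utils.abs to avoid the Int.MinValue trap
  val group = "my-consumer-group"
  val numOffsetsPartitions = 50  // broker default: offsets.topic.num.partitions
  val partition = (group.hashCode & 0x7fffffff) % numOffsetsPartitions
  println(s"commits for '$group' land in __consumer_offsets-$partition")

So all commits for a given group land in one deterministic partition, which
is why some __consumer_offsets partitions can stay empty.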



On Thu, Dec 3, 2015 at 11:10 AM, Guozhang Wang  wrote:

> If you can validate that these partitions have data (i.e. there are some
> offsets committed to Kafka), then you may have to turn on debug-level
> logging in config/tools-log4j.properties, which will allow the console
> consumer to print debug logs, and see if there is anything suspicious.
>
> Guozhang
>
> On Thu, Dec 3, 2015 at 10:55 AM, Marina  wrote:
>
> > Hi, Guozhang,
> > Yes, I can see this topic and partitions in ZK:
> >
> > ls /brokers/topics/__consumer_offsets
> > [partitions]
> > ls /brokers/topics/__consumer_offsets/partitions
> > [44, 45, 46, 47, 48, 49, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 0, 1, 2,
> > 3, 4, 5, 6, 7, 8, 9, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32,
> > 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]
> >
> >
> > example of one partition state:
> > ls /brokers/topics/__consumer_offsets/partitions/44
> > [state]
> > ls /brokers/topics/__consumer_offsets/partitions/44/state
> > []
> >
> >
> > anything else you think I could check?
> >
> > thanks!
> > Marina
> >
> >
> >
> > - Original Message -
> > From: Guozhang Wang 
> > To: "users@kafka.apache.org" ; Marina <
> > ppi...@yahoo.com>
> > Sent: Thursday, December 3, 2015 12:05 PM
> > Subject: Re: Kafka 0.8.2.1 - how to read from __consumer_offsets topic?
> >
> > Marina,
> >
> > To check if the topic does exist in Kafka (i.e. offsets are stored in
> Kafka
> > instead of in ZK) you can check this path in ZK:
> >
> > /brokers/topics/__consumer_offsets
> >
> > By default this topic should have 50 partitions.
> >
> > Guozhang