Re: delete topic causing spikes in fetch/metadata requests

2016-10-16 Thread sunil kalva
Hi
Can you guys help me with this issue?

On Oct 12, 2016 10:35 PM, "sunil kalva" <sambarc...@gmail.com> wrote:

>
> We are using Kafka 0.8.2.2 (client and server). Whenever we delete a
> topic we see a lot of errors in the broker logs like the ones below, and
> there is also a spike in fetch/metadata requests. Can I correlate these
> errors with the topic delete, or is this a known issue? Because of the
> spike in metadata and fetch requests, broker throughput has come down.
>
>
> --
> [2016-10-12 16:04:55,054] ERROR [Replica Manager on Broker 4]: Error when
> processing fetch request for partition [xyz,0] offset 161946645 from
> consumer with correlation id 0. Possible cause: Request for offset
> 161946645 but we only have log segments in the range 185487049 to
> 202816546. (kafka.server.ReplicaManager)
> [... the same ERROR repeats every 1-2 ms; duplicate entries trimmed ...]

delete topic causing spikes in fetch/metadata requests

2016-10-12 Thread sunil kalva
We are using Kafka 0.8.2.2 (client and server). Whenever we delete a topic
we see a lot of errors in the broker logs like the ones below, and there is
also a spike in fetch/metadata requests. Can I correlate these errors with
the topic delete, or is this a known issue? Because of the spike in metadata
and fetch requests, broker throughput has come down.

--
[2016-10-12 16:04:55,054] ERROR [Replica Manager on Broker 4]: Error when
processing fetch request for partition [xyz,0] offset 161946645 from
consumer with correlation id 0. Possible cause: Request for offset
161946645 but we only have log segments in the range 185487049 to
202816546. (kafka.server.ReplicaManager)
[... the same ERROR repeats every 1-2 ms until the excerpt is truncated;
duplicate entries trimmed ...]
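
If the extra fetch traffic comes from consumers that keep retrying an offset
that no longer exists after the topic is deleted (and possibly recreated),
one mitigation for the 0.8.2 high-level consumer is to let it reset to a
valid offset instead of failing repeatedly. A minimal sketch; the ZooKeeper
address and group id are placeholders:

    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class OffsetResetExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181");      // placeholder
            props.put("group.id", "my-consumer-group");      // placeholder
            // "smallest" resets to the earliest available offset,
            // "largest" to the newest, instead of retrying a bad offset.
            props.put("auto.offset.reset", "smallest");
            ConsumerConnector connector =
                Consumer.createJavaConsumerConnector(new ConsumerConfig(props));
        }
    }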

assigning jiras

2016-04-05 Thread sunil kalva
Hi
I was trying to assign a JIRA ticket to myself to start working on it, but
it looks like I don't have permission.
Can someone give me access?

my id: sunilkalva

-- 
SunilKalva


Re: Messages corrupted in kafka

2016-03-29 Thread sunil kalva
Hi
Do we also store the message CRC on disk, and does the server verify it when
reading messages back from disk?
And how should we handle errors when we use async publish?
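
On the async-publish question: with the 0.8.2 new producer, per-record
failures can be caught by passing a Callback to send(). A minimal sketch;
the broker address and topic name are placeholders:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class AsyncErrorHandling {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            Producer<byte[], byte[]> producer =
                new KafkaProducer<byte[], byte[]>(props);

            byte[] payload = "event-bytes".getBytes();
            producer.send(new ProducerRecord<byte[], byte[]>("my-topic", payload),
                new Callback() {
                    public void onCompletion(RecordMetadata metadata, Exception e) {
                        if (e != null) {
                            // Send failed after retries: log, alert, or re-queue.
                            System.err.println("publish failed: " + e);
                        }
                    }
                });
            producer.close();
        }
    }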

On Fri, Mar 25, 2016 at 4:17 AM, Becket Qin <becket@gmail.com> wrote:

> You mentioned that you saw a few corrupted messages (< 0.1%). If so, are
> you able to see some corrupted messages if you produce, say, 10M messages?
>
> [earlier quoted messages trimmed; the full exchange appears below]


Re: Messages corrupted in kafka

2016-03-23 Thread sunil kalva
I am using the Java client and Kafka 0.8.2. Since the events are corrupted
on the Kafka broker, I can't read and replay them.

On Thu, Mar 24, 2016 at 9:42 AM, Becket Qin <becket@gmail.com> wrote:

> Hi Sunil,
>
> The messages in Kafka have a CRC stored with each of them. When a consumer
> receives a message, it computes the CRC from the message bytes and compares
> it to the stored CRC. If the computed CRC and the stored CRC do not match,
> that indicates the message has been corrupted. I am not sure why the message
> is corrupted in your case. Corrupted messages seem to be pretty rare because
> the broker actually validates the CRC before it stores the messages on disk.
>
> Is this problem reproducible? If so, can you find out which messages are
> corrupted? Also, are you using the Java clients or some other clients?
>
> Jiangjie (Becket) Qin
>
> [earlier quoted messages trimmed; the original message appears in full below]
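
To make the CRC check above concrete: the stored CRC covers the message
bytes that follow the CRC field, and a consumer-style check recomputes it
and compares. A conceptual sketch using java.util.zip.CRC32, not the actual
broker code:

    import java.util.zip.CRC32;

    public class CrcCheck {
        // The stored CRC covers the bytes after the CRC field
        // (magic byte, attributes, key, value in the 0.8 format).
        static boolean crcMatches(long storedCrc, byte[] bytesAfterCrcField) {
            CRC32 crc = new CRC32();
            crc.update(bytesAfterCrcField);
            return crc.getValue() == storedCrc;
        }
    }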


Re: Messages corrupted in kafka

2016-03-23 Thread sunil kalva
Can someone help me out here?

On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva <kalva.ka...@gmail.com> wrote:

> [quoted original message trimmed; it appears in full below]


Messages corrupted in kafka

2016-03-23 Thread sunil kalva
Hi
I am seeing a few messages getting corrupted in Kafka. It is not happening
frequently, and the percentage is very small (less than 0.1%).

Basically, I am publishing Thrift events in byte-array format to Kafka
topics (without any encoding such as base64), and I also see more events
than I publish (I confirmed this by looking at the offset for the topic).
For example, I publish 100 events but see 110 as the offset for that topic.
(Since this is in production I could not capture the exact messages causing
the problem; we only notice it when we consume, because our Thrift
deserialization fails.)

So my question is: is there a magic byte that determines the boundary of a
message and could collide with the bytes I am sending, or could a network
issue chop a message so that it is stored as multiple messages on the
server side?

tx
SunilKalva
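
For what it's worth, message boundaries in the 0.8 wire/on-disk format come
from an explicit size field rather than a delimiter or magic byte, so raw
Thrift bytes cannot create false boundaries. A rough sketch of reading one
message-set entry (illustrative only, not the broker's actual parser):

    import java.nio.ByteBuffer;

    public class MessageSetEntry {
        // Boundaries come from the size field, so the payload bytes
        // themselves are never scanned for delimiters.
        static byte[] readValue(ByteBuffer buf) {
            long offset = buf.getLong();   // 8 bytes: logical offset
            int size = buf.getInt();       // 4 bytes: size of message that follows
            int storedCrc = buf.getInt();  // 4 bytes: CRC of the bytes below
            byte magic = buf.get();        // 1 byte: format version, not a marker
            byte attributes = buf.get();   // 1 byte: compression codec flags
            int keyLen = buf.getInt();     // -1 means null key
            if (keyLen > 0) buf.position(buf.position() + keyLen);
            int valueLen = buf.getInt();
            byte[] value = new byte[valueLen];
            buf.get(value);
            return value;
        }
    }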


retrieve commit time for messages

2015-12-14 Thread sunil kalva
Hi
Is there any way to get the commit timestamp of messages retrieved using
the Kafka consumer API?

t
SunilKalva
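
Kafka 0.8.x does not store a broker-side timestamp with messages (message
timestamps only arrived in later releases), so one common workaround,
assuming you control the payload format, is to embed a produce-time
timestamp yourself. A sketch:

    import java.nio.ByteBuffer;

    public class TimestampedPayload {
        // Prepend the produce time -- an approximation of "commit" time.
        static byte[] wrap(byte[] payload) {
            return ByteBuffer.allocate(8 + payload.length)
                .putLong(System.currentTimeMillis())
                .put(payload)
                .array();
        }

        static long timestampOf(byte[] stored) {
            return ByteBuffer.wrap(stored).getLong();
        }
    }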


Spooling support for kafka publishers !

2015-08-07 Thread sunil kalva
Hi
What are the best practices for spooling support on the producer end when
the Kafka cluster is not reachable or degraded?

We are thinking of putting a wrapper around the Kafka producer that can
spool locally if the producer can't talk to the Kafka cluster. The problem
with this approach is that all the web servers which process requests would
need local disks for spooling.
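
A minimal sketch of such a wrapper, assuming the 0.8.2 new producer; class
and file names are illustrative, and a separate replay job would re-publish
the spooled records:

    import java.io.DataOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import org.apache.kafka.clients.producer.Callback;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SpoolingProducer {
        private final Producer<byte[], byte[]> producer;
        private final File spoolDir;

        public SpoolingProducer(Producer<byte[], byte[]> producer, File spoolDir) {
            this.producer = producer;
            this.spoolDir = spoolDir;
        }

        public void send(final String topic, final byte[] value) {
            producer.send(new ProducerRecord<byte[], byte[]>(topic, value),
                new Callback() {
                    public void onCompletion(RecordMetadata md, Exception e) {
                        if (e != null) spool(topic, value);  // cluster unreachable
                    }
                });
        }

        private synchronized void spool(String topic, byte[] value) {
            File file = new File(spoolDir, topic + ".spool");
            try (DataOutputStream out = new DataOutputStream(
                    new FileOutputStream(file, true))) {  // append mode
                out.writeInt(value.length);  // length-prefix so replay can frame records
                out.write(value);
            } catch (IOException io) {
                // Last resort: alert or drop.
            }
        }
    }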

Or is there a better way of doing this?

I was going through this JIRA; it looks like it is still open:
https://issues.apache.org/jira/browse/KAFKA-1955

-
SunilKalva


New (0.8.2.1) Sync Producer with Batch ?

2015-08-01 Thread sunil kalva
Can I configure the new producer API with batching and send data in batches
in synchronous mode?
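
One way to combine the two with the 0.8.2 new producer, sketched below:
batching is controlled by batch.size and linger.ms, and blocking on the
returned futures makes the batch synchronous. The broker address and topic
are placeholders:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;
    import java.util.concurrent.Future;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;

    public class SyncBatchSend {
        static void sendBatch(List<byte[]> payloads) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");  // placeholder
            props.put("batch.size", "65536");  // max bytes per in-memory batch
            props.put("linger.ms", "50");      // wait up to 50 ms to fill a batch
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.ByteArraySerializer");
            Producer<byte[], byte[]> producer =
                new KafkaProducer<byte[], byte[]>(props);

            List<Future<RecordMetadata>> pending =
                new ArrayList<Future<RecordMetadata>>();
            for (byte[] payload : payloads) {
                pending.add(producer.send(
                    new ProducerRecord<byte[], byte[]>("my-topic", payload)));
            }
            for (Future<RecordMetadata> f : pending) {
                f.get();  // blocks until that record is acknowledged
            }
            producer.close();
        }
    }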

-- 
SunilKalva