Hi
Can you guys help me with this issue?
On Oct 12, 2016 10:35 PM, "sunil kalva" <sambarc...@gmail.com> wrote:

We are using kafka 0.8.2.2 (client and server). Whenever we delete a
topic we see a lot of errors in the broker logs like the ones below, and
there is also a spike in fetch/metadata requests. Can I correlate these
errors with the topic delete, or is it a known issue? Since there is a
spike in metadata requests and fetch
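For what it's worth, in the 0.8.x brokers topic deletion is disabled by default and was known to be fragile; one common explanation for the spike is that clients and replica fetchers keep requesting the deleted partitions (and logging errors) until fresh metadata propagates, which would match what you see. A minimal sketch of the broker-side setting that gates deletion, assuming a standard server.properties:

```properties
# server.properties -- sketch only; in 0.8.2 topic deletion is
# disabled by default and must be switched on on every broker
# before kafka-topics.sh --delete actually takes effect
delete.topic.enable=true
```

If this is false on any broker, the delete is only marked in zookeeper and never completes, which tends to prolong exactly this kind of error/metadata churn.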
Hi
I was trying to assign a jira to myself to start working on it, but it
looks like I don't have permission.
Can someone give me access?
My id: sunilkalva
--
SunilKalva
messages, (< 0.1%). If so, are you able to see some corrupted messages
if you produce, say, 10M messages?

(Becket) Qin

> On Wed, Mar 23, 2016 at 9:40 PM, sunil kalva <kalva.ka...@gmail.com>
> wrote:
>
> > I am using java client and kafka 0.8.2, since events are corrupted in
On Wed, Mar 23, 2016 at 8:28 PM, sunil kalva <kalva.ka...@gmail.com> wrote:

> can someone help me out here.

On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva <kalva.ka...@gmail.com> wrote:
Hi
I am seeing a few messages getting corrupted in kafka. It is not
happening frequently, and the percentage is also very small (less than
0.1%). Basically I am publishing thrift events in byte array format to
kafka topics (without encoding like base64), and I also see more events
than I publish (i
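Since the thrift bytes go onto the wire unencoded, one way to rule the transport in or out is to round-trip the payload through base64, the encoding the message above mentions skipping: corrupted base64 fails loudly at decode time instead of producing a silently broken thrift struct. A minimal, self-contained sketch using `java.util.Base64` (the class name and helpers here are illustrative, not a kafka API):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;

public class Base64Payload {
    // Encode a serialized event (e.g. thrift bytes) before producing.
    static byte[] encode(byte[] raw) {
        return Base64.getEncoder().encode(raw);
    }

    // Decode on the consumer side before thrift deserialization;
    // throws IllegalArgumentException on corrupted input.
    static byte[] decode(byte[] encoded) {
        return Base64.getDecoder().decode(encoded);
    }

    public static void main(String[] args) {
        byte[] raw = "serialized-thrift-event".getBytes(StandardCharsets.UTF_8);
        byte[] roundTripped = decode(encode(raw));
        System.out.println(Arrays.equals(raw, roundTripped)); // prints "true"
    }
}
```

The cost is roughly a 33% size increase per message, so this is more useful as a diagnostic than as a permanent fix.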
Hi
Is there any way to get the commit timestamp of the messages that are
retrieved using the kafka consumer API?
--
SunilKalva
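As far as I know, the 0.8.2 message format carries no timestamp at all (broker/producer timestamps only arrived later, in 0.10), so the usual workaround is to embed a produce-time timestamp in the payload yourself. A minimal sketch, assuming you control serialization on both ends (the class and helper names are hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class TimestampedPayload {
    // Prepend the produce-time epoch millis to the serialized event bytes.
    static byte[] wrap(long timestampMs, byte[] event) {
        return ByteBuffer.allocate(8 + event.length)
                .putLong(timestampMs)
                .put(event)
                .array();
    }

    // Read back the embedded timestamp from a consumed payload.
    static long timestampOf(byte[] payload) {
        return ByteBuffer.wrap(payload).getLong();
    }

    // Recover the original event bytes after the 8-byte timestamp header.
    static byte[] eventOf(byte[] payload) {
        byte[] event = new byte[payload.length - 8];
        ByteBuffer.wrap(payload, 8, event.length).get(event);
        return event;
    }

    public static void main(String[] args) {
        byte[] event = "my-event".getBytes(StandardCharsets.UTF_8);
        byte[] payload = wrap(1460000000000L, event);
        System.out.println(timestampOf(payload)); // prints "1460000000000"
        System.out.println(new String(eventOf(payload), StandardCharsets.UTF_8)); // prints "my-event"
    }
}
```

Note this gives you the producer's clock at send time, not the broker's commit time; in 0.8.2 there is no way to recover the latter from the consumer API.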
Hi
What are the best practices to achieve spooling support on the producer
end if the kafka cluster is not reachable or degraded?
We are thinking of having a wrapper on the kafka producer which can spool
locally if the producer can't talk to the kafka cluster. The problem with
this approach is, all web servers which
Can I configure the new producer API with batching, and send data in
batches in synchronous mode?
--
SunilKalva
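On the batching question: in the new (java) producer, send() is always asynchronous but returns a Future, so "synchronous" delivery is done in code by blocking on it, e.g. producer.send(record).get(). Batching itself is controlled by configuration; a sketch of the relevant producer settings (values are illustrative defaults, not recommendations):

```properties
# producer config sketch for the new producer in 0.8.2
batch.size=16384   # max bytes buffered per partition before a batch is sent
linger.ms=5        # wait up to 5 ms to let a batch fill before sending
acks=all           # block the future until the full ISR has acknowledged
```

One caveat: calling get() after every single send() effectively reduces each batch to one record. To keep batching while staying synchronous, send a group of records first, collect the Futures, and then block on all of them together.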