Re: [ANNOUNCE] New Kafka PMC member: Matthias J. Sax

2019-04-18 Thread Ankur Rana
continued to be active in the community and made significant contributions to the project. Congratulations to Matthias! -- Guozhang -- Thanks, Ankur Rana Software Developer FarEye

Re: Broker suddenly becomes unstable after upgrade to 2.1.0

2019-03-12 Thread Ankur Rana
maxBytes=1048576, currentLeaderEpoch=Optional[813])}, isolationLevel=READ_UNCOMMITTED, toForget=, metadata=(sessionId=519957053, epoch=INITIAL)) (kafka.server.ReplicaFetcherThread)
java.net.SocketTimeoutException: Failed to connect within 3 ms
    at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:93)
    at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:190)
    at kafka.server.AbstractFetcherThread.kafka$server$AbstractFetcherThread$$processFetchRequest(AbstractFetcherThread.scala:241)
    at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:130)
    at kafka.server.AbstractFetcherThread$$anonfun$maybeFetch$1.apply(AbstractFetcherThread.scala:129)
    at scala.Option.foreach(Option.scala:257)
    at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:129)
    at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
-- Thanks, Ankur Rana Software Developer FarEye

Re: Upgrade from version 2.1.0 to 2.1.1

2019-02-22 Thread Ankur Rana
I've done it recently and it worked fine. Thanks. On Fri, 22 Feb 2019 at 07:47, Ankur Rana wrote: Hi, I'll be upgrading Kafka version from 2.1.0 to 2.1.1. Are there any special steps to take? I'll be doing a Kafka upgrade for th

Upgrade from version 2.1.0 to 2.1.1

2019-02-21 Thread Ankur Rana
th the new version. 6. Once the server is up and running, I will follow the same steps with another broker. We have 5 such brokers. Just wanted to check if this is an okay way to upgrade the Kafka version from 2.1.0 to 2.1.1? -- Thanks, Ankur Rana Software Developer FarEye

Re: COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-18 Thread Ankur Rana
, 2019 at 3:30 AM Ankur Rana wrote: Hi Ismael, Thank you for replying. We are using Kafka version 2.1.0 and Kafka Streams version 2.0.0. Just to let you know, I was able to fix the problem by changing the processing guarantee config from exactly once to
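[The value he switched to is cut off in the preview. For reference, a minimal sketch, not the original application's config, of how the processing-guarantee setting is toggled in Kafka Streams; the application id and bootstrap servers below are placeholders.]

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class ProcessingGuaranteeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder values; replace with your own application id and brokers.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "example-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // Exactly-once enables the transactional path that involves the
        // broker-side transaction coordinator; the default is at-least-once.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        // To fall back to the non-transactional path:
        // props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.AT_LEAST_ONCE);

        System.out.println("processing.guarantee = "
                + props.get(StreamsConfig.PROCESSING_GUARANTEE_CONFIG));
    }
}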

Re: COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-16 Thread Ankur Rana
n Sat, Feb 16, 2019 at 10:32 PM Ismael Juma wrote: Hi, what version of Kafka are you using? Ismael. On Fri, Feb 15, 2019 at 8:32 PM Ankur Rana wrote: Any comments anyone? On Fri, Feb 15, 2019 at 6:08 PM Ankur Rana

Re: COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-15 Thread Ankur Rana
Any comments anyone? On Fri, Feb 15, 2019 at 6:08 PM Ankur Rana wrote: Hi everyone, we have a Kafka cluster with 5 brokers, with all topics having a replication factor of at least 2. We have multiple Kafka consumer applications running on this cluster. Most of these cons

COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-15 Thread Ankur Rana
ny more details. Stream config : [image: image.png] Stream application code : https://codeshare.io/Gq6pLB -- Thanks, Ankur Rana Software Developer FarEye

Re: Kafka Streams KGroupedTable.count() method returning negative values.

2019-02-08 Thread Ankur Rana
negative by observing the results of the count().toStream() before the mapValues call? Thanks! Bill. On Fri, Feb 8, 2019 at 1:31 PM Ankur Rana wrote: Hi Bill, I will try to make that change but since the negative values a
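[A sketch of the check Bill is suggesting. The topic name, key/value types, and grouping below are placeholders, not the original topology from the codeshare link; the point is only where the peek sits: on count().toStream(), before any mapValues.]

import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.KTable;

public class CountDebugSketch {

    // Builds a topology that logs the raw count() results before any further
    // mapping, so a negative value can be attributed to count() itself rather
    // than to later transformations.
    public static Topology build() {
        StreamsBuilder builder = new StreamsBuilder();

        // "jobs" is a placeholder topic keyed by job id, with the company id as value.
        KTable<String, String> jobs = builder.table("jobs");

        KTable<String, Long> countsPerCompany = jobs
                .groupBy((jobId, companyId) -> KeyValue.pair(companyId, companyId))
                .count();

        // Peek at the raw counts before any mapValues step.
        countsPerCompany.toStream()
                .peek((companyId, count) ->
                        System.out.println(companyId + " -> " + count));

        return builder.build();
    }
}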

Re: Kafka Streams KGroupedTable.count() method returning negative values.

2019-02-08 Thread Ankur Rana
)
    .mapValues((k, v) -> new JobSummary(k, v))
    .peek((k, v) -> {
        log.info(k.toString());
        log.info(v.toString());
    })
    .selectKey((k, v) -> v.getCompany_id()) // So that the count is consumed in order for each company
    .to(JOB_SUMMARY, Produced.with(Serdes.Long(), jobSummarySerde));
-- Thanks, Ankur Rana Software Developer FarEye

Kafka Streams KGroupedTable.count() method returning negative values.

2019-02-08 Thread Ankur Rana
new JobSummary(k, v))
    .peek((k, v) -> {
        log.info(k.toString());
        log.info(v.toString());
    })
    .selectKey((k, v) -> v.getCompany_id()) // So that the count is consumed in order for each company
    .to(JOB_SUMMARY, Produced.with(Serdes.Long(), jobSummarySerde));

Re: SIGSEGV (0xb) on TransactionCoordinator

2019-01-10 Thread Ankur Rana
enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.java.com/bugreport/crash.jsp
#
--- T H R E A D ---
Current thread (0x7f547a29e800): JavaThread "kafka-request-handler-5" daemon [_thread_in_Java, id=13722, stack(0x7f53700f9000,0x7f53701fa000)]
siginfo: si_signo: 11 (SIGSEGV), si_code: 1 (SEGV_MAPERR), si_addr: 0xdd310c13
Registers:
RAX=0x0001, RBX=0x0006e9072fc8, RCX=0x0688, RDX=0x00075e026fc0
RSP=0x7f53701f7f00, RBP=0x0006e98861f8, RSI=0x7f53771a4238, RDI=0x0006e9886098
R8 =0x132d, R9 =0xdd310c13, R10=0x0007c010bbb0, R11=0xdd310c13
R12=0x, R13=0xdd310b3d, R14=0xdd310c0c, R15=0x7f547a29e800
RIP=0x7f546a857d0d, EFLAGS=0x00010202, CSGSFS=0x002b0033, ERR=0x0004
TRAPNO=0x000e
Thanks,
-- Thanks, Ankur Rana Software Developer FarEye

What is the average Garbage Collection time in your production environment.

2019-01-08 Thread Ankur Rana
Hello guys, can you please share some insight into how much average GC time your Kafka brokers spend? I am seeing really high GC usage in some of our brokers; sometimes it gets as high as 30%, and our producers start lagging. -- Thanks, Ankur Rana Software Developer FarEye
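[Not from the thread, but for anyone wanting to put a number on it: a minimal sketch that reads the per-collector counters the JVM exposes, which is one way to turn "GC usage around 30%" into concrete figures. In practice you would read the same MBeans remotely over JMX against the broker process rather than run this standalone.]

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcTimeReport {
    public static void main(String[] args) {
        long uptimeMs = ManagementFactory.getRuntimeMXBean().getUptime();
        long totalGcMs = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            // Cumulative collection count and time since JVM start, per collector.
            System.out.printf("%s: collections=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            // getCollectionTime() may return -1 if the collector does not report it.
            totalGcMs += Math.max(0, gc.getCollectionTime());
        }
        // Rough fraction of wall-clock time spent in GC since startup.
        System.out.printf("GC time / uptime = %.2f%%%n",
                100.0 * totalGcMs / Math.max(1, uptimeMs));
    }
}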

Can anyone help me with these questions on kafka?

2019-01-05 Thread Ankur Rana
https://stackoverflow.com/questions/54039216/how-come-kafka-fails-to-commit-offset-for-a-particular-partition https://stackoverflow.com/questions/54020753/why-is-kafka-producer-perf-test-sh-throwing-error -- Thanks, Ankur Rana Software Developer FarEye