Re: Latest Logstash 7.8 and compatibility with latest Kafka 2.5.0

2020-07-06 Thread allen chan
The best approach is to read the plugin changelog at https://github.com/logstash-plugins/logstash-integration-kafka/blob/master/CHANGELOG.md (per the 10.1.0 notes they are up to Kafka client 2.4.1) and then check which version is packaged with the release. If it is not the right version, you need to use automation or

Re: Partition reassignment data file is empty

2017-12-31 Thread allen chan
4:51 PM, Brett Rann <br...@zendesk.com.invalid> > wrote: > > > That's happening because your JSON is malformed. Losing the last comma > will > > fix it. > > > > On Sun, Dec 31, 2017 at 3:43 PM, allen chan < > allen.michael.c...@gmail.com> >

Partition reassignment data file is empty

2017-12-30 Thread allen chan
Hello. Kafka version: 0.11.0.1. I am trying to increase the replication factor for a topic and I am getting the below error. Can anyone help explain what the error means? The JSON is not empty: $ cat increase-replication-factor.json {"version":1, "partitions":[
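
The reply on this thread traced the error to a trailing comma in the JSON. A minimal sketch of writing and pre-validating a well-formed reassignment file before handing it to kafka-reassign-partitions.sh (the topic name, partition numbers, and replica ids below are illustrative, not from the original post):

```shell
# Sketch of a syntactically valid reassignment file. A trailing comma
# after the last partition entry would make it invalid JSON, which Kafka
# surfaces as "Partition reassignment data file is empty".
cat > increase-replication-factor.json <<'EOF'
{
  "version": 1,
  "partitions": [
    {"topic": "logstash", "partition": 0, "replicas": [1, 2]},
    {"topic": "logstash", "partition": 1, "replicas": [2, 1]}
  ]
}
EOF

# Validate before running the reassignment tool:
python3 -m json.tool increase-replication-factor.json > /dev/null && echo "JSON OK"
```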

Re: [ANNOUCE] Apache Kafka 0.10.1.1 Released

2016-12-23 Thread allen chan
From what I can tell, the main kafka website has not been updated with this release. The download page shows 0.10.1.0 as the latest release. The above link for the release notes does not work either: Not Found. The requested URL /dist/kafka/0.10.1.1/RELEASE_NOTES.html was not found on this server.

Re: broker randomly shuts down

2016-06-30 Thread allen chan
, you'd need to look for the stderr. > > On Thu, Jun 30, 2016 at 5:07 PM allen chan <allen.michael.c...@gmail.com> > wrote: > > > Anyone else have ideas? > > > > This is still happening. I moved off zookeeper from the server to its own > > dedicated VMs.

Re: broker randomly shuts down

2016-06-30 Thread allen chan
wrote: > What about in dmesg? I have run into this issue and it was the OOM > killer. I also ran into a heap issue using too much of the direct memory > (JVM). Reducing the fetcher threads helped with that problem. > On Jun 2, 2016 12:19 PM, "allen chan" <allen.michael.c.

Re: concept of record vs request vs batch

2016-06-16 Thread allen chan
Can anyone help with this question? On Tue, Jun 14, 2016 at 1:45 PM, allen chan <allen.michael.c...@gmail.com> wrote: > Thanks for the answer, Otis. > The producer that I use (Logstash) does not track message sizes. > > I already loaded all the metrics from JMX into my monitorin

Re: concept of record vs request vs batch

2016-06-14 Thread allen chan
& Elasticsearch Consulting Support Training - http://sematext.com/ > > > On Mon, Jun 13, 2016 at 4:43 PM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > In JMX for Kafka producer there are metrics for both request, record, and > > batch size Max +

concept of record vs request vs batch

2016-06-13 Thread allen chan
In JMX for the Kafka producer there are metrics for request, record, and batch size (max and avg). What is the difference between these concepts? In the logging use case I assume a record is a single log line, a batch is multiple log lines grouped together, and a request is the batch wrapped with the metadata
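
The relationship the question describes maps onto the producer's sizing settings. A minimal sketch (values are the producer defaults as I recall them; verify against the producer configuration reference for your client version):

```
# One record = one message. Records destined for the same partition are
# grouped into a batch of up to batch.size bytes, or until linger.ms
# expires, whichever comes first.
batch.size=16384
linger.ms=0
# One request to a broker carries one or more batches (one per partition
# led by that broker) and is capped by:
max.request.size=1048576
```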

Re: broker randomly shuts down

2016-06-02 Thread allen chan
ntu, it's pretty easy to find in > /var/log/syslog (depending on your setup). I don't know about other > operating systems. > > On Thu, Jun 2, 2016 at 5:54 AM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > I have an issue where my brokers would randomly shut its

broker randomly shuts down

2016-06-01 Thread allen chan
I have an issue where my brokers randomly shut themselves down. I turned on debug in log4j.properties but still do not see a reason why the shutdown is happening. Has anyone seen this behavior before? Version 0.10.0. log4j.properties: log4j.rootLogger=DEBUG, kafkaAppender * I tried TRACE level
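
As a later reply in this thread notes, a broker that dies with nothing in Kafka's own logs is often the kernel OOM killer, which logs to the kernel ring buffer rather than to log4j. A sketch of the check (the sample line is illustrative, not from a real host):

```shell
# Simulate what an OOM-kill entry looks like; on a live host you would
# run `dmesg | grep -i 'killed process'` or check /var/log/syslog.
sample='Out of memory: Killed process 6507 (java) total-vm:8388608kB'
echo "$sample" | grep -i 'killed process' && echo "possible OOM kill"
```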

Re: kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
use the > consumer-groups.sh script from 0.9 until all the brokers have been > upgraded. > > -Jason > > On Tue, May 24, 2016 at 6:31 PM, tao xiao <xiaotao...@gmail.com> wrote: > > > I am pretty sure consumer-group.sh uses tools-log4j.properties > > >

Re: kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
ers. > > Thanks, > Jason > > On Tue, May 24, 2016 at 5:21 PM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and > > added these to my server.properties: > > > > i

kafka-consumer-group.sh failed on 0.10.0 but works on 0.9.0.1

2016-05-24 Thread allen chan
I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and added these to my server.properties: inter.broker.protocol.version=0.9.0.1 log.message.format.version=0.9.0.1 When checking the lag I get this error: [ac...@ekk001.scl ~]$ sudo
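
For context, the settings above are the first step of the documented rolling upgrade. A sketch of the follow-up step from the 0.10.0 upgrade notes (worth verifying against the notes for your exact version):

```
# Once every broker runs 0.10.0, bump the protocol version and restart
# brokers one at a time; keep the message format at the old version
# until all consumers have been upgraded, then bump it too.
inter.broker.protocol.version=0.10.0.0
log.message.format.version=0.9.0.1
```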

Re: KAFKA-3470: treat commits as member heartbeats #1206

2016-05-22 Thread allen chan
Thank you for confirming! On Sunday, May 22, 2016, Guozhang Wang <wangg...@gmail.com> wrote: > Hello, > > KAFKA-3470 is a mainly a broker-side change, which handles the commit > request to also "reset" the timer for heartbeat as well. > > Guozhang > > On

KAFKA-3470: treat commits as member heartbeats #1206

2016-05-21 Thread allen chan
Hi, does anyone know whether this is a broker-side or consumer-side implementation? We deal with long poll processing times that cause rebalances, and this should fix our problem. We will be upgrading our brokers to the 0.10.x branch long before upgrading the consumers, so I just wanted to email

consumer offsets not updating

2016-05-07 Thread allen chan
Brokers: 0.9.0.1. Consumers: 0.8.2.2. Normally my monitoring system runs the consumer-groups tool to check consumer offsets. Example: [ac...@ekk001.atl kafka]$ sudo /opt/kafka/kafka_2.11-0.9.0.1/bin/kafka-consumer-groups.sh --zookeeper ekz003.atl:2181 --describe --group indexers

kafka-consumer-perf.sh

2016-02-22 Thread allen chan
Something I do not understand about this perf-test tool: 1. The legend shows 5 columns but the data shows 6 columns. I am assuming the 0 column is the one throwing everything off? 2. Does nMsg.sec mean the number of messages consumed per second? [bin]$ sudo ./kafka-consumer-perf-test.sh --group

Re: Questions from new user

2016-02-16 Thread allen chan
Hi, can anyone help with this? On Fri, Jan 29, 2016 at 11:50 PM, allen chan <allen.michael.c...@gmail.com> wrote: > Use case: We are using kafka as a broker in one of our elasticsearch > clusters. Kafka caches the logs if elasticsearch has any performance > issues. I have Kafka set

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread allen chan
I export my JMX_PORT setting in the kafka-server-start.sh script and have not run into any issues yet. On Mon, Feb 8, 2016 at 9:01 AM, Manikumar Reddy wrote: > kafka scripts uses "kafka-run-class.sh" script to set environment variables > and run classes. So if you set any
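
The approach described, exporting JMX_PORT so kafka-run-class.sh picks it up, can be sketched as follows (9999 is an assumed, arbitrary free port, not a value from the thread):

```shell
# Added near the top of kafka-server-start.sh: kafka-run-class.sh reads
# JMX_PORT from the environment and enables remote JMX on that port.
export JMX_PORT=9999
echo "broker JMX will listen on port $JMX_PORT"
```

Note that any other kafka-* CLI started in the same environment inherits this variable, which is the cause of the "port already in use" thread further down this page.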

Questions from new user

2016-01-29 Thread allen chan
for the attention. Allen Chan

Re: BrokerState JMX Metric

2015-12-06 Thread allen chan
es.scala > . > > Dong > > On Thu, Dec 3, 2015 at 7:20 PM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > Hi all > > > > Does anyone have info about this JMX metric > > kafka.server:type=KafkaServer,name=BrokerState or what does the number > &

BrokerState JMX Metric

2015-12-03 Thread allen chan
Hi all, does anyone have info about this JMX metric kafka.server:type=KafkaServer,name=BrokerState, or what the number values mean? -- Allen Michael Chan

Re: consumer offset tool and JMX metrics do not match

2015-11-21 Thread allen chan
thread discusses one of such issues where consumer lag was not > reported correctly. > > Regards, > Prabhjot > > On Sun, Nov 15, 2015 at 7:04 AM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > I believe producers / brokers / and consumers has been restarte

Re: consumer offset tool and JMX metrics do not match

2015-11-19 Thread allen chan
Can anyone help me understand this? On Mon, Nov 16, 2015 at 11:21 PM, allen chan <allen.michael.c...@gmail.com> wrote: > According to the documentation, offsets by default are committed every 10 > secs. Shouldn't that be frequent enough for JMX to be accurate? > > autocommit.

Re: consumer offset tool and JMX metrics do not match

2015-11-16 Thread allen chan
> > * This is just the *committed* offsets > > > > When the Lag value in the Kafka consumer JMX is high (for example 5M), > > ConsumerOffsetChecker shows a matching number. > > > > I am running kafka_2.10-0.8.2.1 > > > > Osama > > > > -

Re: consumer offset tool and JMX metrics do not match

2015-11-16 Thread allen chan
According to the documentation, offsets by default are committed every 10 secs. Shouldn't that be frequent enough for JMX to be accurate? autocommit.interval.ms is the frequency at which the consumed offsets are committed to zookeeper. On Mon, Nov 16, 2015 at 3:31 PM, allen chan <allen.michae
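
For reference, a sketch of the setting being quoted. The property name has varied across client versions (later clients spell it auto.commit.interval.ms), and the 10000 ms value below simply matches the 10-second figure cited in this thread, so verify both against the docs for your consumer version:

```
# Old (ZooKeeper-based) consumer: how often consumed offsets are
# committed to ZooKeeper, in milliseconds.
auto.commit.interval.ms=10000
```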

Re: consumer offset tool and JMX metrics do not match

2015-11-14 Thread allen chan
after > you had started the consumption and until you see this issue ? > > Thanks, > Prabhjot > > > > On Sat, Nov 14, 2015 at 5:53 AM, allen chan <allen.michael.c...@gmail.com> > wrote: > > > I also looked at this metric in JMX and it is also 0 > > >

Re: consumer offset tool and JMX metrics do not match

2015-11-13 Thread allen chan
I also looked at this metric in JMX and it is also 0 *kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=logstash* On Fri, Nov 13, 2015 at 4:06 PM, allen chan <allen.michael.c...@gmail.com> wrote: > Hi All, > > I am comparing the output from kafka.tools.ConsumerOffse

consumer offset tool and JMX metrics do not match

2015-11-13 Thread allen chan
Hi All, I am comparing the output from kafka.tools.ConsumerOffsetChecker vs JMX (kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=logstash,topic=logstash_fdm,partition=*) and they do not match. ConsumerOffsetChecker shows ~60 lag per partition while JMX shows 0 for all

Re: log.retention.hours not working?

2015-09-21 Thread allen chan
ds to, it will start deleting > old logs. > > On Mon, Sep 21, 2015 at 8:58 PM allen chan <allen.michael.c...@gmail.com> > wrote: > > > Hi, > > > > Just brought up new kafka cluster for testing. > > Was able to use the console producers to send 1k of logs
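
The (truncated) reply above points at retention being enforced per log segment: a segment is only eligible for deletion after it rolls. A sketch of the broker settings involved (the values shown are the broker defaults as I recall them; treat them as illustrative):

```
# A segment must be closed (rolled) before retention can delete it, so a
# small test topic may keep data past log.retention.hours until the
# active segment fills or rolls.
log.retention.hours=168
log.segment.bytes=1073741824
# How often the cleaner checks for segments eligible for deletion:
log.retention.check.interval.ms=300000
```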

Re: port already in use error when trying to add topic

2015-09-14 Thread allen chan
After completely disabling the JMX settings, I was able to create topics. It seems there is an issue with using JMX with the product. Should I create a bug? On Sun, Sep 13, 2015 at 9:07 PM, allen chan <allen.michael.c...@gmail.com> wrote: > Changing the port to 9998 did not help. Still
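
A hedged sketch of the workaround this thread converged on: the broker already binds JMX_PORT, and any kafka-* CLI started in the same environment inherits it and fails to bind, so the variable has to be cleared for the CLI invocation (the kafka-topics.sh arguments in the comment are illustrative):

```shell
# Demonstrate clearing JMX_PORT for a single command with env -u:
export JMX_PORT=9999
env -u JMX_PORT sh -c 'echo "JMX_PORT here: ${JMX_PORT:-unset}"'
# On a real host the command after `sh -c` would be something like:
#   kafka-topics.sh --create --zookeeper zk:2181 --topic test \
#     --partitions 1 --replication-factor 1
```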

Re: port already in use error when trying to add topic

2015-09-13 Thread allen chan
Changing the port to 9998 did not help; the same error still occurred. On Sat, Sep 12, 2015 at 12:27 AM, Foo Lim <foo@vungle.com> wrote: > Try throwing > > JMX_PORT=9998 > > in front of the command. Anything other than 9994. > > Foo > > On Frida

port already in use error when trying to add topic

2015-09-11 Thread allen chan
Hi all, first time testing kafka with a brand new cluster. I am running into an issue that I do not understand. The server started up fine but I get an error when trying to create a topic. [achan@server1 ~]$ ps -ef | grep -i kafka root 6507 1 0 15:42 ? 00:00:00 sudo

virtualized kafka

2015-08-31 Thread allen chan
I am currently using Elasticsearch (the ELK stack), and Redis is the current choice of broker. I want to move to a distributed broker to make that layer more HA, and I am exploring kafka as a replacement. I have a few questions: 1. I read that kafka is designed to write contents to disk and this