Best is to read the changelog of the plugin:
https://github.com/logstash-plugins/logstash-integration-kafka/blob/master/CHANGELOG.md
They are up to 2.4.1 per the 10.1.0 notes, and you have to check which version is
packaged with the release. If it is not the right version, you need to use
automation or
4:51 PM, Brett Rann <br...@zendesk.com.invalid>
> wrote:
>
> > That's happening because your JSON is malformed. Losing the last comma
> will
> > fix it.
> >
> > On Sun, Dec 31, 2017 at 3:43 PM, allen chan <
> allen.michael.c...@gmail.com>
>
Hello
Kafka Version: 0.11.0.1
I am trying to increase the replication factor for a topic and I am getting
the error below. Can anyone help explain what the error means? The JSON is
not empty:
$ cat increase-replication-factor.json
{"version":1,
"partitions":[
>From what i can tell, it looks like the main kafka website is not updated
with this release. Download page shows 0.10.1.0 as latest release.
The above link for release notes does not work either.
Not Found
The requested URL /dist/kafka/0.10.1.1/RELEASE_NOTES.html was not found on
this server.
, you'd need to look for the stderr.
>
> On Thu, Jun 30, 2016 at 5:07 PM allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > Anyone else have ideas?
> >
> > This is still happening. I moved off zookeeper from the server to its own
> > dedicated VMs.
wrote:
> What about in dmesg? I have run into this issue and it was the OOM
> killer. I also ran into a heap issue using too much of the direct memory
> (JVM). Reducing the fetcher threads helped with that problem.
> On Jun 2, 2016 12:19 PM, "allen chan" <allen.michael.c.
Can anyone help with this question?
On Tue, Jun 14, 2016 at 1:45 PM, allen chan <allen.michael.c...@gmail.com>
wrote:
> Thanks for the answer Otis.
> The producer that I use (Logstash) does not track message sizes.
>
> I already loaded all the metrics from JMX into my monitorin
& Elasticsearch Consulting Support Training - http://sematext.com/
>
>
> On Mon, Jun 13, 2016 at 4:43 PM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > In JMX for Kafka producer there are metrics for both request, record, and
> > batch size Max +
In JMX for Kafka producer there are metrics for both request, record, and
batch size Max + Avg.
What is the difference between these concepts?
In the logging use case: I assume a record is a single log line, a batch is
multiple log lines together, and a request is the batch wrapped with the
metadata
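All three sizes can be read off a running producer over JMX with Kafka's bundled JmxTool. A sketch, not a definitive invocation: the JMX port 9999 and client-id `logstash` are assumptions, and the attribute names follow the producer-metrics naming (`record-size-avg` and friends); it needs a live producer JVM to run against:

```shell
# Read producer size metrics from a live producer JVM (assumed JMX port
# 9999 and client-id "logstash"); record <= batch <= request in general.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.producer:type=producer-metrics,client-id=logstash' \
  --attributes record-size-avg,batch-size-avg,request-size-avg \
  --reporting-interval 10000
```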
ntu, it's pretty easy to find in
> /var/log/syslog (depending on your setup). I don't know about other
> operating systems.
>
> On Thu, Jun 2, 2016 at 5:54 AM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > I have an issue where my brokers would randomly shut its
I have an issue where my brokers would randomly shut itself down.
I turned on debug in log4j.properties but still do not see a reason why the
shutdown is happening.
Has anyone seen this behavior before?
version 0.10.0
log4j.properties
log4j.rootLogger=DEBUG, kafkaAppender
* I tried TRACE level
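For reference, a sketch of the relevant log4j.properties lines; the logger names below follow the stock file shipped with Kafka (an assumption to verify against your copy), and keeping the request loggers at WARN stops them drowning out the DEBUG output:

```properties
log4j.rootLogger=DEBUG, kafkaAppender
log4j.logger.kafka=DEBUG
# The request loggers are extremely chatty at DEBUG/TRACE; keep them quiet
log4j.logger.kafka.network.RequestChannel$=WARN
log4j.logger.kafka.request.logger=WARN
```

If nothing shows up even at TRACE, the shutdown reason may be outside the JVM logs entirely (e.g. the kernel OOM killer, visible in dmesg).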
use the
> consumer-groups.sh script from 0.9 until all the brokers have been
> upgraded.
>
> -Jason
>
> On Tue, May 24, 2016 at 6:31 PM, tao xiao <xiaotao...@gmail.com> wrote:
>
> > I am pretty sure consumer-groups.sh uses tools-log4j.properties
> >
>
ers.
>
> Thanks,
> Jason
>
> On Tue, May 24, 2016 at 5:21 PM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and
> > added these to my server.properties:
> >
> > i
I upgraded one of my brokers to 0.10.0. I followed the upgrade guide and
added these to my server.properties:
inter.broker.protocol.version=0.9.0.1
log.message.format.version=0.9.0.1
When checking the lag I get this error:
[ac...@ekk001.scl ~]$ sudo
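The upgrade guide's sequence for those two properties, as a sketch (versions taken from this thread; the commented lines are the later steps):

```properties
# Step 1: before restarting brokers on the 0.10.0 binaries
inter.broker.protocol.version=0.9.0.1
log.message.format.version=0.9.0.1
# Step 2: once every broker runs 0.10.0, bump the protocol
#inter.broker.protocol.version=0.10.0
# Step 3: once clients are upgraded, bump the message format
#log.message.format.version=0.10.0
```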
Thank you for confirming!
On Sunday, May 22, 2016, Guozhang Wang <wangg...@gmail.com> wrote:
> Hello,
>
> KAFKA-3470 is mainly a broker-side change, which handles the commit
> request to also "reset" the timer for heartbeat as well.
>
> Guozhang
>
> On
Hi,
Does anyone know if this is a broker-side implementation or consumer-side?
We deal with long poll processing times that cause rebalances, and this
should fix our problem.
We will be upgrading our brokers to the 0.10.x branch long before upgrading
the consumers, so I just wanted to email
Brokers: 0.9.0.1
Consumers: 0.8.2.2
In the normal situation my monitoring system runs the consumer groups tool
to check consumer offsets.
Example:
[ac...@ekk001.atl kafka]$ sudo
/opt/kafka/kafka_2.11-0.9.0.1/bin/kafka-consumer-groups.sh --zookeeper
ekz003.atl:2181 --describe --group indexers
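For a monitoring check built on that output, a sketch that sums the LAG column with awk; the sample lines and the column position (6th) mirror the 0.9-era --describe layout and are assumptions to verify against your version's actual output:

```shell
# Sum the LAG column of `kafka-consumer-groups.sh --describe` output.
# Sample mimics the 0.9 layout: GROUP TOPIC PARTITION CURRENT-OFFSET
# LOG-END-OFFSET LAG OWNER (whitespace-separated here for brevity).
sample='indexers logstash 0 100 160 60 owner-1
indexers logstash 1 200 258 58 owner-2'
total=$(printf '%s\n' "$sample" | awk '{sum += $6} END {print sum}')
echo "total lag: $total"
```

In a real check, replace `printf '%s\n' "$sample"` with the kafka-consumer-groups.sh invocation itself and alert when the total crosses a threshold.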
Something I do not understand about this perf-test tool:
1. The legend shows 5 columns but the data shows 6 columns.
I am assuming the 0 column is the one that is throwing everything off?
2. Does nMsg.sec mean the number of messages consumed per second?
[bin]$ sudo ./kafka-consumer-perf-test.sh --group
Hi can anyone help with this?
On Fri, Jan 29, 2016 at 11:50 PM, allen chan <allen.michael.c...@gmail.com>
wrote:
> Use case: We are using Kafka as the broker in one of our Elasticsearch
> clusters. Kafka buffers the logs if Elasticsearch has any performance
> issues. I have Kafka set
I export my JMX_PORT setting in the kafka-server-start.sh script and have
not run into any issues yet.
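Concretely, a sketch of that export near the top of kafka-server-start.sh (the port number is an arbitrary example); kafka-run-class.sh reads JMX_PORT and adds the JMX remote options to the broker JVM:

```shell
# Pin the broker's JMX port; picked up by kafka-run-class.sh.
export JMX_PORT=9999
```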
On Mon, Feb 8, 2016 at 9:01 AM, Manikumar Reddy
wrote:
> Kafka scripts use the "kafka-run-class.sh" script to set environment variables
> and run classes. So if you set any
for the attention.
Allen Chan
es.scala
> .
>
> Dong
>
> On Thu, Dec 3, 2015 at 7:20 PM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > Hi all
> >
> > Does anyone have info about this JMX metric
> > kafka.server:type=KafkaServer,name=BrokerState or what does the number
> &
Hi all
Does anyone have info about this JMX metric
kafka.server:type=KafkaServer,name=BrokerState, or what the number
values mean?
--
Allen Michael Chan
thread discusses one such issue where consumer lag was not
> reported correctly.
>
> Regards,
> Prabhjot
>
> On Sun, Nov 15, 2015 at 7:04 AM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > I believe producers / brokers / and consumers has been restarte
Can anyone help me understand this?
On Mon, Nov 16, 2015 at 11:21 PM, allen chan <allen.michael.c...@gmail.com>
wrote:
> According to the documentation, offsets by default are committed every 10
> secs. Shouldn't that be frequent enough that JMX would be accurate?
>
> autocommit.
>
> * This is just the *committed* offsets
>
>
> > When the Lag value in the Kafka consumer JMX is high (for example 5M),
> > ConsumerOffsetChecker shows a matching number.
> >
> > I am running kafka_2.10-0.8.2.1
> >
> > Osama
> >
> > -
According to the documentation, offsets by default are committed every 10 secs.
Shouldn't that be frequent enough that JMX would be accurate?
auto.commit.interval.ms is the frequency at which the consumed offsets are
committed to ZooKeeper.
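As a properties sketch for the old (pre-0.9) high-level consumer, with the interval set to the 10 seconds described above (defaults vary by version, so treat the values as an example, not the shipped default):

```properties
auto.commit.enable=true
# How often consumed offsets are written to ZooKeeper, in milliseconds
auto.commit.interval.ms=10000
```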
On Mon, Nov 16, 2015 at 3:31 PM, allen chan <allen.michae
after
> you had started the consumption and until you see this issue ?
>
> Thanks,
> Prabhjot
>
>
>
> On Sat, Nov 14, 2015 at 5:53 AM, allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > I also looked at this metric in JMX and it is also 0
> >
>
I also looked at this metric in JMX and it is also 0
*kafka.consumer:type=ConsumerFetcherManager,name=MaxLag,clientId=logstash*
On Fri, Nov 13, 2015 at 4:06 PM, allen chan <allen.michael.c...@gmail.com>
wrote:
> Hi All,
>
> I am comparing the output from kafka.tools.ConsumerOffse
Hi All,
I am comparing the output from kafka.tools.ConsumerOffsetChecker vs JMX
(kafka.server:type=FetcherLagMetrics,name=ConsumerLag,clientId=logstash,topic=logstash_fdm,partition=*)
and they do not match.
ConsumerOffsetChecker is showing ~60 Lag per partition and JMX shows 0 for
all
ds to, it will start deleting
> old logs.
>
> On Mon, Sep 21, 2015 at 8:58 PM allen chan <allen.michael.c...@gmail.com>
> wrote:
>
> > Hi,
> >
> > Just brought up new kafka cluster for testing.
> > Was able to use the console producers to send 1k of logs
After completely disabling JMX settings, I was able to create topics. It seems
like there is an issue with using JMX with the product. Should I file a bug?
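A likely mechanism, sketched as a simplified stand-in for the JMX handling in kafka-run-class.sh (the function name is made up for illustration): every tool launched through kafka-run-class.sh inherits the same JMX_PORT, so kafka-topics.sh run on the broker host tries to bind the port the broker JVM already holds:

```shell
# Simplified, illustrative stand-in for kafka-run-class.sh's JMX handling:
# a set JMX_PORT turns into a jmxremote.port flag for whatever JVM starts.
jmx_opts() {
  if [ -n "$1" ]; then
    printf '%s' "-Dcom.sun.management.jmxremote.port=$1"
  fi
}
echo "broker JVM: $(jmx_opts 9999)"
echo "CLI tool:   $(jmx_opts "")"  # empty JMX_PORT -> no JMX flags, no clash
```

So rather than disabling JMX entirely, clearing the variable for just the one command (`JMX_PORT= bin/kafka-topics.sh --create ...`) sidesteps the port collision while the broker keeps its JMX endpoint.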
On Sun, Sep 13, 2015 at 9:07 PM, allen chan <allen.michael.c...@gmail.com>
wrote:
> Changing the port to 9998 did not help. Still
Changing the port to 9998 did not help. Still the same error occurred
On Sat, Sep 12, 2015 at 12:27 AM, Foo Lim <foo@vungle.com> wrote:
> Try throwing
>
> JMX_PORT=9998
>
> In front of the command. Anything other than 9994
>
> Foo
>
> On Frida
Hi all,
First time testing Kafka with a brand new cluster.
I am running into an issue that I do not understand.
The server started up fine but I get an error when trying to create a topic.
*[achan@server1 ~]$ ps -ef | grep -i kafka*
*root 6507 1 0 15:42 ?00:00:00 sudo
I am currently using Elasticsearch (the ELK stack) and Redis is the current
choice of broker.
I want to move to a distributed broker to make that layer more HA.
I am currently exploring Kafka as a replacement.
I have a few questions:
1. I read that Kafka is designed to write contents to disk and this