Hi,
I’m looking for the JMX metric that represents replica lag time in 0.9.0.1.
Based on the documentation, I can only find
kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica, which is
max lag in messages between follower and leader replicas. But since in 0.9.0.1 lag
in messages
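For anyone looking for a way to sample that MBean from the command line, the JmxTool class bundled with Kafka can poll it. This is only a sketch: it assumes the broker was started with JMX enabled (e.g. JMX_PORT=9999), and the host/port are placeholders.

```shell
# Sketch: poll the broker-side MaxLag MBean every 5 seconds.
# Assumes JMX_PORT=9999 was set when the broker started; adjust host/port.
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --object-name 'kafka.server:type=ReplicaFetcherManager,name=MaxLag,clientId=Replica' \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --reporting-interval 5000
```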
Hi,
You can use storm-kafka-client, which supports storing consumer offsets in the
Kafka cluster.
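A minimal sketch of wiring up the spout with storm-kafka-client follows. The broker address, topic, and group id are placeholders, and the builder method names are from the 1.x line of storm-kafka-client, so check them against the version you pull in. Because this spout uses the new KafkaConsumer underneath, committed offsets land in Kafka's __consumer_offsets topic rather than in ZooKeeper.

```java
import org.apache.storm.kafka.spout.KafkaSpout;
import org.apache.storm.kafka.spout.KafkaSpoutConfig;
import org.apache.storm.topology.TopologyBuilder;

public class StormKafkaClientSketch {
    public static void main(String[] args) {
        // storm-kafka-client commits offsets through the new KafkaConsumer,
        // so they end up in Kafka's __consumer_offsets topic, not ZooKeeper.
        KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("broker1:9092", "my-topic") // placeholders
                .setGroupId("my-consumer-group")                 // placeholder
                .build();

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig));
        // ... attach bolts and submit the topology as usual
    }
}
```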
Yuanjia Li
From: pradeep s
Date: 2017-02-20 03:49
To: users
Subject: Storm kafka integration
Hi,
I am using Storm 1.0.2 and Kafka 0.10.1.1 and have a query on the Spout code to
integrate with Kafka. As per
Hi Liwu,
Correct me if I am wrong.
When calling the method ConsumerConnector.shutdown(), it will send
"ZookeeperConsumerConnector.shutdownCommand" to the queue rather than setting
the ConsumerIterator's state to NOT_READY directly. So the consumer will
continue consuming until it gets the shutdownCommand in the
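To illustrate the pattern being described, here is a sketch using the old high-level consumer API (kafka.javaapi.consumer, 0.8/0.9 era). The ZooKeeper address and group id are placeholders, and it won't run without a live cluster.

```java
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.javaapi.consumer.ConsumerConnector;

public class ShutdownSketch {
    public static void main(String[] args) {
        // shutdown() enqueues a shutdown command; a thread blocked in
        // ConsumerIterator.hasNext() only wakes up once that command is
        // drained from the queue, so in-flight messages are still consumed.
        Properties props = new Properties();
        props.put("zookeeper.connect", "zk1:2181"); // placeholder
        props.put("group.id", "my-group");          // placeholder
        ConsumerConnector connector =
            Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

        // In a consuming thread you would iterate a KafkaStream:
        //   while (it.hasNext()) { process(it.next()); } // exits on shutdown
        connector.shutdown(); // called from another thread in a real application
    }
}
```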
Hi,
And on a side note, it's logged _many_ times. I had to suppress some
logging at package level :-/
Anybody else experiencing the same?
Cheers,
Francesco
On 20 February 2017 at 00:04, Simon Teles wrote:
> Hello,
>
> I'm curious to know why, when the producer/consumer are
Hi,
I’m wondering if the official Kafka documentation is misleading. Here (
https://kafka.apache.org/documentation/#security_sasl_brokernotes) you can
read:
1. Client section is used to authenticate a SASL connection with
zookeeper. It also allows the brokers to set SASL ACL on zookeeper
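For context, a Client login section of the kind that documentation note describes (the section ZooKeeper's SASL client looks up by default on the broker) typically looks like the following; the keytab path and principal are placeholders:

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_broker.keytab"
    principal="kafka/broker1.example.com@EXAMPLE.COM";
};
```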
Hi
It’d be great to document what the JAAS file might look like at:
http://docs.confluent.io/3.1.2/schema-registry/docs/security.html
I need to ask my IT for principals, which takes a while, so is this a
correct JAAS?
KafkaClient{
com.sun.security.auth.module.Krb5LoginModule required
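For comparison, a complete KafkaClient section along those lines might look like the sketch below; the keytab path and principal are placeholders to swap for whatever IT provides:

```
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/schema_registry.keytab"
    principal="schemaregistry/host.example.com@EXAMPLE.COM";
};
```

The Kerberos service name of the brokers is usually supplied separately in the client properties (sasl.kerberos.service.name=kafka) rather than inside the JAAS file.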
Hello,
I'm curious to know why, when the producer/consumer are created, the
ProducerConfig and ConsumerConfig are logged twice? Is that normal?
Example :
10:52:08.963 INFO [o.a.k.s.p.i.StreamThread||l.170] ~~ Creating
producer client for stream thread [StreamThread-1]
10:52:08.969 INFO
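Those config dumps come from INFO-level logging in the client config classes, so one way to quiet them (as mentioned elsewhere in the thread, by suppressing at the logger level) is a log4j.properties fragment like this; the logger names match the 0.10.x client packages:

```
# Raise the level on the client config loggers to suppress the
# ProducerConfig/ConsumerConfig dumps at startup
log4j.logger.org.apache.kafka.clients.producer.ProducerConfig=WARN
log4j.logger.org.apache.kafka.clients.consumer.ConsumerConfig=WARN
```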
Hi team,
We are running Confluent 0.9.0.1 on a cluster with 6 brokers. These days one of
our brokers (broker 1) frequently shrinks the ISR and then expands it again
immediately, roughly every 20 minutes, and I couldn’t find out why. Based on the
log, it can kick out any of the other brokers, not just a specific one.
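One way to watch this from the outside is to list partitions whose ISR is currently smaller than the replica set; a sketch with the 0.9-era tooling (which talks to ZooKeeper; the host is a placeholder):

```shell
# Sketch: show partitions currently missing replicas from the ISR
bin/kafka-topics.sh --zookeeper zk1:2181 --describe --under-replicated-partitions
```

The broker also exposes rate MBeans for this (kafka.server:type=ReplicaManager,name=IsrShrinksPerSec and IsrExpandsPerSec), which can help correlate the flapping with GC pauses or network hiccups on the affected broker.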
You should ask Storm people. The Kafka Spout is not provided by the Kafka community.
Or maybe try out Kafka's Streams API (couldn't resist... ;) )
-Matthias
On 2/19/17 11:49 AM, pradeep s wrote:
> Hi,
> I am using Storm 1.0.2 and Kafka 0.10.1.1 and have a query on the Spout code to
> integrate with Kafka.
Hi,
I am using Storm 1.0.2 and Kafka 0.10.1.1 and have a query on the Spout code to
integrate with Kafka. As per the Storm docs, it's mentioned to use BrokerHosts
to register the Kafka Spout.
http://storm.apache.org/releases/1.0.2/storm-kafka.html
In this case will the consumer offsets be stored in
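For reference, with the storm-kafka (BrokerHosts-based) spout linked above, offsets are written to ZooKeeper under the zkRoot/id path passed to SpoutConfig, not to Kafka. A minimal sketch (the ZooKeeper host, topic, and paths are placeholders):

```java
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

public class StormKafkaSketch {
    public static void main(String[] args) {
        // storm-kafka keeps consumer offsets in ZooKeeper,
        // under <zkRoot>/<id> (both placeholders here).
        BrokerHosts hosts = new ZkHosts("zk1:2181");
        SpoutConfig spoutConfig =
            new SpoutConfig(hosts, "my-topic", "/kafka-offsets", "my-spout-id");
        spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig));
    }
}
```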