Re: Issue with kafka-server-stop on RedHat7

2017-05-10 Thread ravi singh
The issue might be due to
https://unix.stackexchange.com/questions/343353/ps-only-prints-up-to-4096-characters-of-any-processs-command-line

I suspect the issue affects Kafka versions > 0.10.0.

More details:
https://github.com/apache/kafka/pull/2515

Regards,
Ravi

On Tue, May 9, 2017 at 12:01 PM, Vedant Nighojkar 
wrote:

> Hi Team,
>
> We are using Apache Kafka in one of our products. We support Windows, AIX
> and Linux RedHat6 and above.
>
> I am seeing an issue with the kafka-server-stop.sh script on RedHat7
> machines. This used to work with RedHat6.
>
> *ps ax | grep -i 'kafka.Kafka'* is unable to find any running Kafka
> processes, because *ps ax* returns a truncated command line
> (https://access.redhat.com/solutions/35925), so the process is never
> killed.
>
> Is there a solution / work-around for this?
>
> Thanks and Regards,
>
> *Vedant Nighojkar*
> Software Engineer
> IBM Analytics
> --
> *Phone:* 978-899-2942
> *E-mail:* *vnig...@us.ibm.com*
>
> 550 King St,
> Littleton, MA 01460
> United States
>
>
>


-- 
*Regards,*
*Ravi*
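
[Editor's note: for anyone hitting this before picking up the fix, a
workaround sketch, not the official script. `pgrep -f` matches against
`/proc/<pid>/cmdline`, which is not subject to the 4096-character `ps`
truncation on RHEL7:]

```shell
# Workaround sketch for kafka-server-stop.sh on RHEL7.
# pgrep -f matches the full command line read from /proc/<pid>/cmdline,
# so it still finds the broker even when `ps ax` truncates its output.
stop_kafka() {
  pids=$(pgrep -f 'kafka\.Kafka')
  if [ -z "$pids" ]; then
    echo "No kafka server to stop"
    return 1
  fi
  kill -s TERM $pids
}
```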


Restrict consumers from connecting to Kafka cluster

2016-11-22 Thread ravi singh
Is it possible to restrict which consumers may consume from a given
Kafka cluster?

-- 
*Regards,*
*Ravi*
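
[Editor's note: one option, assuming Kafka 0.9+ with an authorizer
enabled on the brokers, is ACLs. A sketch; the principal, topic and
group names are placeholders:]

```shell
# Assumes server.properties contains:
#   authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# With an authorizer enabled, principals without a matching ACL are denied.
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:app1 \
  --consumer --topic orders --group app1-group
```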


Get topic level detail from new consumer group command

2016-05-05 Thread ravi singh
 ./bin/kafka-consumer-groups.sh --group batchprocessord_zero
 --bootstrap-server kafka-1-evilcorp.com:9092 --new-consumer --describe
Running the above consumer-groups command describes the consumer for all
the topics it is subscribed to.

Is there a workaround to get *only topic-level detail*?
-- 
*Regards,*
*Ravi*
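
[Editor's note: until the tool grows a topic filter, the describe output
can be filtered externally. A sketch; the column position of the topic
varies across Kafka versions, here it is assumed to be the 2nd column:]

```shell
# Filter helper: keep the header row plus rows whose 2nd column matches
# the given topic name.
topic_only() { awk -v t="$1" 'NR==1 || $2==t'; }

# Usage against a live cluster (placeholder topic "mytopic"):
#   bin/kafka-consumer-groups.sh --new-consumer \
#     --bootstrap-server kafka-1-evilcorp.com:9092 \
#     --group batchprocessord_zero --describe | topic_only mytopic
```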


How to build kafka jar with all dependencies?

2016-05-03 Thread ravi singh
I used *./gradlew jarAll*, but the Scala libraries are still missing from
the jar. It is probably something very simple that I am missing. Please
let me know if anyone knows.


-- 
*Regards,*
*Ravi*
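
[Editor's note: the per-module jars intentionally exclude dependencies.
One way to get everything, Scala runtime included, in one place is the
release-tarball target of Kafka's Gradle build (worth verifying against
the README of your checkout):]

```shell
# Build the release tarball instead of bare jars; it bundles all
# dependency jars (scala-library included) under its libs/ directory.
./gradlew releaseTarGz -x signArchives
# Inspect the result:
tar -tzf core/build/distributions/kafka_*.tgz | head
```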


Kafka Consumer Behaviour

2015-10-13 Thread ravi singh
I was writing Kafka consumer and I have a query related to consumer
processes.

I have a consumer with groupId="testGroupId" and using the same groupId I
consume from multiple topics say, "topic1" and "topic2".

Also, assume "topic1" is already created on broker whereas "topic2" is not
yet created.

Now If I start the consumer I see consumer threads for "topic1" (which is
already created) in zookeeper nodes, but I do not see any consumer
thread(s) for  "topic2".

My question is, will the consumer thread(s) for "topic2" be created
only after we create the topic on the broker?


-- 
*Regards,*
*Ravi*
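
[Editor's note: as far as I know, the high-level consumer's fetcher
threads for a topic only appear once the topic exists (or is
auto-created on first use). Pre-creating it avoids the wait. A sketch;
names and counts are placeholders:]

```shell
# Create "topic2" up front so the consumer's threads can register
# for it immediately (partition/replication values are examples).
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic topic2 --partitions 2 --replication-factor 1
```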


Starting KafkaMetricsReporter from Kafka Consumer/Producer

2014-10-10 Thread ravi singh
In the ProducerPerformance class we use CSVMetricsReporter for metrics
reporting, which I think is started by the following call:
KafkaMetricsReporter.startReporters(verifiableProps)

Similarly, I wrote my own producer and I have a custom implementation of
KafkaMetricsReporter.
But to use the reporter on the client side I must start it first using the
KafkaMetricsReporter.startReporters function.

That's not possible, though, since I don't have access to `startReporters`
in my Producer class.
Is there any other way to start the reporters from the client side?


-- 
*Regards,*
*Ravi*


Re: Starting KafkaMetricsReporter from Kafka Consumer/Producer

2014-10-10 Thread ravi singh
Never mind. I found the issue.

Thanks,
Ravi

On Fri, Oct 10, 2014 at 11:47 AM, ravi singh rrs120...@gmail.com wrote:

 In ProducerPerformance class we use CSVMetricsReporter for metrics
 reporting.
 Which I think is actually started with the help of below function:
 KafkaMetricsReporter.startReporters(verifiableProps)

 Similarly I wrote my own producer and I have a custom implementation of
 KafkaMetricsReporter.
 But to use the reporter on client side i must start it first using
 KafkaMetricsReporter.startReporters function.

 But its not possible since I dont have access to `startReporters` in my
 Producer class.
 Is there any other way to start the reporters from client side?


 --
 *Regards,*
 *Ravi*




-- 
*Regards,*
*Ravi*


connect.timeout.ms property not present in Kafka 0.8 producer config

2014-10-08 Thread ravi singh
Kafka 0.7 has the following producer property:

connect.timeout.ms (default: 5000): the maximum time spent by
kafka.producer.SyncProducer trying to connect to the Kafka broker. Once it
elapses, the producer throws an ERROR and stops.

But when I checked the Kafka 0.8 config, I couldn't find any such property.
Is it deprecated now?
Is there any other configuration I can use to increase the timeout
between producer and broker?

-- 
*Regards,*
*Ravi*
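
[Editor's note: connect.timeout.ms does appear to be gone from the 0.8
producer. The closest knobs I'm aware of, worth verifying against the
producer-config table for your exact version, are the request timeout
and retry settings:]

```
# producer.properties sketch for the 0.8 producer (values are the
# documented defaults, shown as examples, not recommendations)
request.timeout.ms=10000
message.send.max.retries=3
retry.backoff.ms=100
```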


Producer connection timing out

2014-10-08 Thread ravi singh
Even though I am able to ping the broker machine from my producer
machine, the producer throws the exception below while connecting to the
broker.
I wanted to increase the timeout for the producer but couldn't find any
parameter for that in Kafka 0.8.
Any idea what's wrong here?

[2014-10-08 09:29:47,762] ERROR Producer connection to 10.22.44.555:9092
unsuccessful (kafka.producer.SyncProducer)
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Unknown Source)
at sun.nio.ch.Net.connect(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
at
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
at
kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at
kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
at kafka.utils.Utils$.swallow(Utils.scala:167)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:46)
at
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
at
kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
at
kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
at
kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
at scala.collection.immutable.Stream.foreach(Stream.scala:254)
at
kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)
[2014-10-08 09:29:47,766] WARN Fetching topic metadata with correlation id
0 for topics [Set(test_new)] from broker [id:0,host:10.22.33.444,port:9092]
failed (kafka.client.ClientUtils$)
java.net.ConnectException: Connection timed out
at sun.nio.ch.Net.connect0(Native Method)
at sun.nio.ch.Net.connect(Unknown Source)
at sun.nio.ch.Net.connect(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
at
kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
at
kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
at
kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
at kafka.utils.Utils$.swallow(Utils.scala:167)
at kafka.utils.Logging$class.swallowError(Logging.scala:106)
at kafka.utils.Utils$.swallowError(Utils.scala:46)
at
kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
at
kafka.producer.async.ProducerSendThread.tryToHandle(ProducerSendThread.scala:104)
at
kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:87)
at
kafka.producer.async.ProducerSendThread$$anonfun$processEvents$3.apply(ProducerSendThread.scala:67)
at scala.collection.immutable.Stream.foreach(Stream.scala:254)
at
kafka.producer.async.ProducerSendThread.processEvents(ProducerSendThread.scala:66)
at kafka.producer.async.ProducerSendThread.run(ProducerSendThread.scala:44)

-- 
*Regards,*
*Ravi*
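
[Editor's note: a successful ICMP ping does not prove the broker's TCP
port is reachable through a firewall. A quick probe of the port itself,
using bash's built-in /dev/tcp, a sketch with a placeholder address:]

```shell
# Probe a TCP port directly; "open" means a connection succeeded,
# "closed/filtered" means it was refused or timed out (e.g. firewalled).
check_port() {
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "open"
  else
    echo "closed/filtered"
  fi
}
# Usage (placeholder broker address):
#   check_port 10.22.44.55 9092
```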


Connections from kafka consumer

2014-10-08 Thread ravi singh
I have a few questions regarding the Kafka consumer.

In the Kafka properties we only mention the ZooKeeper IPs to connect to,
but I assume the consumer also connects to the Kafka brokers to actually
consume the messages.

We have a firewall restricting ports, so to connect from my consumer I
need to open ports for both ZooKeeper and Kafka. Is my assumption correct?

Also, does the consumer connect on the broker port (9092 by default)?

-- 
*Regards,*
*Ravi*


Re: Kafka consumer - Mbean for max lag

2014-10-03 Thread ravi singh
The MaxLag mbean is only valid for an active consumer. So while the
consumer is actively running, it should be accurate.

On Fri, Oct 3, 2014 at 4:21 AM, Shah, Devang1 devang1.s...@citi.com wrote:

 Hi,

 Referring to http://kafka.apache.org/documentation.html#java


 Number of messages the consumer lags behind the producer by

 kafka.consumer:name=([-.\w]+)-MaxLag,type=ConsumerFetcherManager



 I could not find MBean kafka.consumer when I hooked up jconsole to kafka
 server. I am using kafka_2.9.2-0.8.1.1 version of Kafka.


 Thanks & Regards,
 Devang




-- 
*Regards,*
*Ravi*
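
[Editor's note: a common stumbling block here is that the kafka.consumer
MBeans live in the *consumer's* JVM, not the broker's, so jconsole must
attach to the consumer process. A sketch using the bundled JmxTool; the
JMX port is a placeholder, set e.g. JMX_PORT=9999 before starting the
consumer:]

```shell
# Attach to the consumer JVM's JMX endpoint and dump the MaxLag bean(s).
bin/kafka-run-class.sh kafka.tools.JmxTool \
  --jmx-url service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi \
  --object-name 'kafka.consumer:type=ConsumerFetcherManager,name=*-MaxLag'
```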


Re: kafka producer performance test

2014-10-01 Thread ravi singh
It ships with the Kafka source package. Download the source, build it,
and run the above command.

Regards,
Ravi

On Wed, Oct 1, 2014 at 7:55 PM, Sa Li sal...@gmail.com wrote:

 Hi, All

 I built a 3-node kafka cluster, I want to make performance test, I found
 someone post following thread, that is exactly the problem I have:
 -
 While testing kafka producer performance, I found 2 testing scripts.

 1) performance testing script in kafka distribution

 bin/kafka-producer-perf-test.sh --broker-list localhost:9092 --messages
 1000 --topic test --threads 10 --message-size 100 --batch-size 1
 --compression-codec 1

 2) performance testing script mentioned in


 https://engineering.linkedin.com/kafka/benchmarking-apache-kafka-2-million-writes-second-three-cheap-machines

 bin/kafka-run-class.sh
 org.apache.kafka.clients.tools.ProducerPerformance test6 5000 100
 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092
 buffer.memory=67108864 batch.size=8196

 based on org.apache.kafka.clients.producer.Producer.

 ——


 I was unable to duplicate either of the above methods; I figure the
 commands are outdated. Can anyone point me to how to do such a test with
 the new commands?


 thanks

 Alec




-- 
*Regards,*
*Ravi*


Re: BytesOutPerSec is more than BytesInPerSec.

2014-09-26 Thread ravi singh
Thanks Steven, I am using a replication factor of 3.
As you mentioned, the replicas also consume from the leader; that's the
reason for the deviation between BytesInPerSec and BytesOutPerSec. With
replication factor 3 and one consumer, BytesOut is roughly (2 replicas +
1 consumer) x BytesIn = 3x BytesIn.

Regards,
Ravi

On Fri, Sep 26, 2014 at 10:27 AM, Joel Koshy jjkosh...@gmail.com wrote:

 Can you confirm that you are using a replication factor of two?  As
 Steven said, the replicas also consume from the leader. So it's your
 consumer, plus the replica.

 On Thu, Sep 25, 2014 at 10:04:29PM -0700, ravi singh wrote:
  Thanks Steven. That answers the difference in Bytes in and bytes Out per
  sec. But I was wondering why(and how) is BytesOutPerSec is calculated
 based
  on number of partition even though it is consumed only once?
 
 
  *Regards,*
  *Ravi*
 
  On Thu, Sep 25, 2014 at 9:55 PM, Steven Wu stevenz...@gmail.com wrote:
 
   couldn't see your graph. but your replicator factor is 2. then
 replication
   traffic can be the explanation. basically, BytesOut will be 2x of
 BytesIn.
  
   On Thu, Sep 25, 2014 at 6:19 PM, ravi singh rrs120...@gmail.com
 wrote:
  
I have set up my kafka broker with as single producer and consumer.
 When
   I
am plotting the graph for all topic bytes in/out per sec  i could see
   that
value of  BytesOutPerSec is more than BytesInPerSec.
Is this correct? I confirmed that my consumer is consuming the
 messages
only once. What could be the reason for this behavior?
   
[image: Inline image 1]
   
   
--
*Regards,*
*Ravi*
   
  
 
 
 
  --
  *Regards,*
  *Ravi*




-- 
*Regards,*
*Ravi*


BytesOutPerSec is more than BytesInPerSec.

2014-09-25 Thread ravi singh
I have set up my Kafka broker with a single producer and consumer. When I
plot the graph for all-topic bytes in/out per sec, I see that the value
of BytesOutPerSec is greater than BytesInPerSec.
Is this correct? I confirmed that my consumer consumes each message
only once. What could be the reason for this behavior?

[image: Inline image 1]


-- 
*Regards,*
*Ravi*


Re: BytesOutPerSec is more than BytesInPerSec.

2014-09-25 Thread ravi singh
Thanks Steven. That explains the difference between bytes in and bytes
out per sec. But I was wondering why (and how) BytesOutPerSec is
calculated based on the number of partitions even though each message is
consumed only once?


*Regards,*
*Ravi*

On Thu, Sep 25, 2014 at 9:55 PM, Steven Wu stevenz...@gmail.com wrote:

 couldn't see your graph. but your replicator factor is 2. then replication
 traffic can be the explanation. basically, BytesOut will be 2x of BytesIn.

 On Thu, Sep 25, 2014 at 6:19 PM, ravi singh rrs120...@gmail.com wrote:

  I have set up my kafka broker with as single producer and consumer. When
 I
  am plotting the graph for all topic bytes in/out per sec  i could see
 that
  value of  BytesOutPerSec is more than BytesInPerSec.
  Is this correct? I confirmed that my consumer is consuming the messages
  only once. What could be the reason for this behavior?
 
  [image: Inline image 1]
 
 
  --
  *Regards,*
  *Ravi*
 




-- 
*Regards,*
*Ravi*


Re: Intercept broker operation in Kafka

2014-06-24 Thread ravi singh
Primarily we want to log the data below (although this is not an
exhaustive list):

+ any error/exception during kafka start/stop
+ any error/exception while broker is running
+ broker state changes, e.g. leader re-election or a broker going down
+ currently live brokers
+ new topic creation
+ when messages are deleted by broker after specified limit
+ Broker health : memory usage

Regards,
Ravi


On Tue, Jun 24, 2014 at 11:11 AM, Neha Narkhede neha.narkh...@gmail.com
wrote:

 What kind of broker metrics are you trying to push to this centralized
 logging framework?

 Thanks,
 Neha
 On Jun 23, 2014 8:51 PM, ravi singh rrs120...@gmail.com wrote:

  Thanks Guozhang/Neha for replies.
  Here's my use case:
 
  We use proprietary application logging  in our apps. We are planning to
 use
  Kafka brokers in production , but apart from the logs which are already
  logged using log4j in kafka we want to log the broker stats using our
  centralized application logging framework.
 
  Simply put I want to write an application which could start when the
 kafka
  brokers starts, read the broker state and metrics and push it to the
  centralized logging servers.
 
  In ActiveMQ we have a plugin for our proprietary logging. We intercept
  broker operation and install the plugin into the interceptor chain of the
  broker.
 
  Regards,
  Ravi
 
 
  On Mon, Jun 23, 2014 at 9:29 PM, Neha Narkhede neha.narkh...@gmail.com
  wrote:
 
   Ravi,
  
   Our goal is to provide the best implementation of a set of useful
   abstractions and features in Kafka. The motivation behind this
 philosophy
   is performance and simplicity at the cost of flexibility. In most
 cases,
  we
   can argue that the loss in flexibility is minimal since you can always
  get
   that functionality by modeling your application differently, especially
  if
   the system supports high performance. ActiveMQ has to support the JMS
   protocol and hence provide all sorts of hooks and plugins on the
 brokers
  at
   the cost of performance.
  
   Could you elaborate more on your use case? There is probably another
 way
  to
   model your application using Kafka.
  
   Thanks,
   Neha
  
  
   On Sat, Jun 21, 2014 at 9:24 AM, ravi singh rrs120...@gmail.com
 wrote:
  
How do I intercept Kakfa broker operation so that features such as
security,logging,etc can be implemented as a pluggable filter. For
   example
we have BrokerFilter class in ActiveMQ , Is there anything similar
 in
Kafka?
   
--
*Regards,*
*Ravi*
   
  
 
 
 
  --
  *Regards,*
  *Ravi*
 




-- 
*Regards,*
*Ravi*


Are Broker stats stored in zookeeper?

2014-06-23 Thread ravi singh
I want to feed some vital broker stats into a different logging system.
Can I read the broker-specific data from ZooKeeper?

-- 
*Regards,*
*Ravi*


Re: Intercept broker operation in Kafka

2014-06-23 Thread ravi singh
Thanks Guozhang/Neha for replies.
Here's my use case:

We use proprietary application logging in our apps. We are planning to use
Kafka brokers in production, but apart from the logs that Kafka already
writes via log4j, we want to log the broker stats using our centralized
application logging framework.

Simply put, I want to write an application that starts when the Kafka
broker starts, reads the broker state and metrics, and pushes them to the
centralized logging servers.

In ActiveMQ we have a plugin for our proprietary logging. We intercept
broker operation and install the plugin into the interceptor chain of the
broker.

Regards,
Ravi


On Mon, Jun 23, 2014 at 9:29 PM, Neha Narkhede neha.narkh...@gmail.com
wrote:

 Ravi,

 Our goal is to provide the best implementation of a set of useful
 abstractions and features in Kafka. The motivation behind this philosophy
 is performance and simplicity at the cost of flexibility. In most cases, we
 can argue that the loss in flexibility is minimal since you can always get
 that functionality by modeling your application differently, especially if
 the system supports high performance. ActiveMQ has to support the JMS
 protocol and hence provide all sorts of hooks and plugins on the brokers at
 the cost of performance.

 Could you elaborate more on your use case? There is probably another way to
 model your application using Kafka.

 Thanks,
 Neha


 On Sat, Jun 21, 2014 at 9:24 AM, ravi singh rrs120...@gmail.com wrote:

  How do I intercept Kakfa broker operation so that features such as
  security,logging,etc can be implemented as a pluggable filter. For
 example
  we have BrokerFilter class in ActiveMQ , Is there anything similar in
  Kafka?
 
  --
  *Regards,*
  *Ravi*
 




-- 
*Regards,*
*Ravi*
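
[Editor's note: the broker can load custom KafkaMetricsReporter
implementations itself, which may cover this use case without
intercepting broker operations. A server.properties sketch; the class
name is hypothetical:]

```
# server.properties: load a custom reporter at broker startup.
# com.example.CentralLoggingReporter is a placeholder for your own
# implementation of kafka.metrics.KafkaMetricsReporter.
kafka.metrics.reporters=com.example.CentralLoggingReporter
kafka.metrics.polling.interval.secs=10
```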


Intercept broker operation in Kafka

2014-06-21 Thread ravi singh
How do I intercept Kafka broker operations so that features such as
security, logging, etc. can be implemented as a pluggable filter? For
example, we have the BrokerFilter class in ActiveMQ; is there anything
similar in Kafka?

-- 
*Regards,*
*Ravi*