Re: Topics being automatically deleted?

2016-09-15 Thread Manikumar Reddy
Looks like you have not changed the default data log directory. By default, Kafka is configured to store its data logs in the /tmp/ folder, and /tmp gets cleared on system reboots. Change the log.dirs config property to some other directory. On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar

Re: [ANNOUNCE] New committer: Jason Gustafson

2016-09-06 Thread Manikumar Reddy
congrats, Jason! On Wed, Sep 7, 2016 at 9:28 AM, Ashish Singh wrote: > Congrats, Jason! > > On Tuesday, September 6, 2016, Jason Gustafson wrote: > > > Thanks all! > > > > On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin >

Re: Understand producer metrics

2016-08-18 Thread Manikumar Reddy
This doc link may help: http://kafka.apache.org/documentation.html#new_producer_monitoring On Fri, Aug 19, 2016 at 2:36 AM, David Yu wrote: > Kafka users, > > I want to resurface this post since it becomes crucial for our team to > understand our recent Samza throughput

Re: Unable to write, leader not available

2016-08-03 Thread Manikumar Reddy
Hi, Can you enable authorization debug logs and check for entries about denied operations? We should also enable the required operations on the Cluster resource. Thanks, Manikumar On Thu, Aug 4, 2016 at 1:51 AM, Bryan Baugher wrote: > Hi everyone, > > I was trying out kerberos on Kafka

Re: [kafka-clients] [VOTE] 0.10.0.1 RC1

2016-08-03 Thread Manikumar Reddy
Hi, There are two versions of the slf4j-log4j jar in the build (1.6.1 and 1.7.21). slf4j-log4j12-1.6.1.jar is coming from the streams:examples module. Thanks, Manikumar On Tue, Aug 2, 2016 at 8:31 PM, Ismael Juma wrote: > Hello Kafka users, developers and client-developers, > > This

Re: Topic not getting deleted on 0.8.2.1

2016-07-28 Thread Manikumar Reddy
Many delete-topic issues have been fixed in the latest versions, so it is highly recommended to move to the latest version. https://issues.apache.org/jira/browse/KAFKA-1757 fixes a similar issue on the Windows platform. On Thu, Jul 28, 2016 at 3:40 PM, Ghosh, Prabal Kumar < prabal.kumar.gh...@sap.com>

Re: Synchronized block in StreamTask

2016-07-28 Thread Manikumar Reddy
You already got a reply from Guozhang on the dev mailing list. On Thu, Jul 28, 2016 at 7:09 AM, Pierre Coquentin < pierre.coquen...@gmail.com> wrote: > Hi, > > I've a simple technical question about kafka streams. > In class org.apache.kafka.streams.processor.internals.StreamTask, the > method

Re: Log retention not working

2016-07-27 Thread Manikumar Reddy
Also check whether any value is set for the log.retention.bytes broker config. On Wed, Jul 27, 2016 at 8:03 PM, Samuel Taylor wrote: > Is it possible that your log directory is in /tmp/ and your OS is deleting > that directory? I know it's happened to me before. > > - Samuel > > On

Re: Consumer Offsets and Open FDs

2016-07-19 Thread Manikumar Reddy
are a) upgrade b) backport the patch yourself. b) seems extremely risky to > me > > Thanks > > Tom > > On Tue, Jul 19, 2016 at 5:49 AM, Manikumar Reddy < > manikumar.re...@gmail.com> > wrote: > > > Try increasing log cleaner threads. > > > > On Tue, Ju

Re: Consumer Offsets and Open FDs

2016-07-18 Thread Manikumar Reddy
>at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63) > >[2016-06-24 09:57:39,881] INFO [kafka-log-cleaner-thread-0], Stopped > (kafka.log.LogCleaner) > > > > > >Is log.cleaner.dedupe.buffer.size a broker setting? What is a good > number to set it to? &

Re: Enabling PLAINTEXT inter broker security

2016-07-15 Thread Manikumar Reddy
Hi, Which Kafka version are you using? SASL/PLAIN support is available from the Kafka 0.10.0.0 release onwards. Thanks Manikumar On Fri, Jul 15, 2016 at 4:22 PM, cs user wrote: > Apologies, just to me clear, my broker settings are actually as below, > using PLAINTEXT

Re: Consumer Offsets and Open FDs

2016-07-13 Thread Manikumar Reddy
? > > Thanks again! > > > Lawrence Weikum > > On 7/13/16, 10:34 AM, "Manikumar Reddy" <manikumar.re...@gmail.com> wrote: > > Hi, > > Are you seeing any errors in log-cleaner.log? The log-cleaner thread can > crash on certain errors. > > Thank

Re: Consumer Offsets and Open FDs

2016-07-13 Thread Manikumar Reddy
Hi, Are you seeing any errors in log-cleaner.log? The log-cleaner thread can crash on certain errors. Thanks Manikumar On Wed, Jul 13, 2016 at 9:54 PM, Lawrence Weikum wrote: > Hello, > > We’re seeing a strange behavior in Kafka 0.9.0.1 which occurs about every > other

Fwd: consumer.subscribe(Pattern p , ..) method fails with Authorizer

2016-07-08 Thread Manikumar Reddy
Hi, The consumer.subscribe(Pattern p, ..) method implementation tries to fetch metadata for all topics. This will throw TopicAuthorizationException on internal topics and other unauthorized topics. We may need to move the pattern matching to the server side. Is this a known issue? If not, I will raise
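
For reference, a minimal sketch of the pattern subscription being discussed, assuming a 0.9/0.10-era Java consumer; the broker address, group id, and regex are placeholder values. The metadata fetch backing the regex match is what can surface TopicAuthorizationException for unauthorized topics.

import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PatternSubscribeExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");      // placeholder broker
        props.put("group.id", "pattern-demo");                  // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        // Pattern subscription: the client fetches metadata for all topics to
        // evaluate the regex, which is why internal/unauthorized topics can
        // trigger TopicAuthorizationException under an Authorizer.
        consumer.subscribe(Pattern.compile("orders-.*"), new ConsumerRebalanceListener() {
            public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
            public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
        });
        consumer.poll(1000);
        consumer.close();
    }
}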

Re: Log retention just for offset topic

2016-06-29 Thread Manikumar Reddy
Hi, Kafka internally creates the offsets topic (__consumer_offsets) with compact mode on. From 0.9.0.1 onwards, log.cleaner.enable=true by default. This means topics with cleanup.policy=compact will now be compacted by default. You can tweak the offsets topic configuration by using the below

Re: [DISCUSS] Java 8 as a minimum requirement

2016-06-17 Thread Manikumar Reddy
I agree with Harsha and Marcus. Many Kafka users are still on Java 7, and some of them will definitely upgrade to newer versions eventually. We may need to support it for a while, and we can remove the support from the next major version onwards. Thanks, Manikumar On Fri, Jun 17, 2016 at 2:04 PM, Marcus Gründler

Re: Automatic Broker Id Generation

2016-05-20 Thread Manikumar Reddy
ed in meta.properties. Am i right? > > Thanks > > On Thu, May 19, 2016 at 7:14 PM, Manikumar Reddy < > manikumar.re...@gmail.com> > wrote: > > > Auto broker id generation logic: > > 1. If there is a user provided broker.id, then it is used and id range > is

Re: [COMMERCIAL] Re: [COMMERCIAL] Re: download - 0.10.0.0 RC6

2016-05-19 Thread Manikumar Reddy
Hi, commitId is nothing but the latest git commit hash of the release; it is captured while building the binary distribution. commitId is available in the binary release (kafka_2.10-0.10.0.0.tgz), but it will not be available if you build from the source release (kafka-0.10.0.0-src.tgz). On Wed, May 18, 2016 at

Re: Automatic Broker Id Generation

2016-05-19 Thread Manikumar Reddy
Auto broker id generation logic: 1. If there is a user-provided broker.id, then it is used, and the id range is from 0 to reserved.broker.max.id. 2. If there is no user-provided broker.id, then auto id generation starts from reserved.broker.max.id + 1. 3. broker.id is stored in the meta.properties file under

Re: client.id, v9 consumer, metrics, JMX and quotas

2016-05-11 Thread Manikumar Reddy
Hi, This is a known issue. Check the links below for the related discussion: https://issues.apache.org/jira/browse/KAFKA-3494 https://qnalist.com/questions/6420696/discuss-mbeans-overwritten-with-identical-clients-on-a-single-jvm Manikumar On Wed, May 11, 2016 at 7:29 PM, Paul Mackles

Re: How to work around log compaction error (0.8.2.2)

2016-04-27 Thread Manikumar Reddy
Hi, Are you enabling log compaction on a topic with compressed messages? If yes, that might be the reason for the exception. Log compaction in 0.8.2.2 does not support compressed messages; this was fixed in 0.9.0.0 (KAFKA-1641, KAFKA-1374). Check the mail thread below for some corrective

Re: Best Guide/link for Kafka Ops work

2016-04-21 Thread Manikumar Reddy
This book can help you: Kafka: The Definitive Guide ( http://shop.oreilly.com/product/0636920044123.do) On Thu, Apr 21, 2016 at 9:38 PM, Mudit Agarwal wrote: > Hi, > Any recommendations for any online guide/link on managing/Administration > of kafka cluster. >

Re: Compaction does not seem to kick in

2016-04-21 Thread Manikumar Reddy
Did you set the broker config property log.cleanup.policy=compact or the topic-level property cleanup.policy=compact? On Thu, Apr 21, 2016 at 7:16 PM, Kasim Doctor wrote: > Hi everyone, > > I have a cluster of 5 brokers with Kafka 2.10_0.8.2.1 and one of the > topics compacted

Re: Metrics for Log Compaction

2016-04-15 Thread Manikumar Reddy
Hi, log compaction related JMX metric object names are given below. kafka.log:type=LogCleaner,name=cleaner-recopy-percent kafka.log:type=LogCleaner,name=max-buffer-utilization-percent kafka.log:type=LogCleaner,name=max-clean-time-secs kafka.log:type=LogCleanerManager,name=max-dirty-percent

Re: Metrics for Log Compaction

2016-04-15 Thread Manikumar Reddy
Hi, kafka.log:type=LogCleaner,name=cleaner-recopy-percent kafka.log:type=LogCleanerManager,name=max-dirty-percent kafka.log:type=LogCleaner,name=max-clean-time-secs After every compaction cycle, we also print some useful statistics to logs/log-cleaner.log file. On Wed, Apr 13, 2016 at 7:16
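
For illustration, a minimal sketch of reading one of the listed log-cleaner MBeans remotely, assuming the broker was started with JMX enabled (JMX_PORT set); the host, port, and the "Value" gauge attribute name are assumptions here, not confirmed by the thread.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class LogCleanerMetricReader {
    public static void main(String[] args) throws Exception {
        // Assumes the broker exposes JMX on localhost:9999 (placeholder host/port).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName cleaner = new ObjectName(
                    "kafka.log:type=LogCleaner,name=max-clean-time-secs");
            // Gauge MBeans typically expose a single "Value" attribute (assumed here).
            Object value = mbsc.getAttribute(cleaner, "Value");
            System.out.println("max-clean-time-secs = " + value);
        } finally {
            connector.close();
        }
    }
}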

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
t; Oleg > > On Apr 12, 2016, at 9:22 AM, Manikumar Reddy <manikumar.re...@gmail.com> > wrote: > > > > New consumer config property "max.poll.records" is getting introduced > in > > upcoming 0.10 release. > > This property can be used to contro

Re: Control the amount of messages batched up by KafkaConsumer.poll()

2016-04-12 Thread Manikumar Reddy
The new consumer config property "max.poll.records" is being introduced in the upcoming 0.10 release. This property can be used to control the number of records returned by each poll. Manikumar On Tue, Apr 12, 2016 at 6:26 PM, Oleg Zhurakousky < ozhurakou...@hortonworks.com> wrote: > Is there a way to specify
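
A minimal sketch of setting max.poll.records on the 0.10 Java consumer; the broker address, group id, and topic name are placeholders.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MaxPollRecordsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "batch-demo");                 // hypothetical group id
        props.put("max.poll.records", "100");                // cap records returned per poll()
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder topic
            ConsumerRecords<String, String> records = consumer.poll(1000);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}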

Re: KafkaProducer Retries in .9.0.1

2016-04-05 Thread Manikumar Reddy
Hi, Producer message size validation checks ("buffer.memory", "max.request.size") happen before batching and sending messages. The retry mechanism applies to broker-side errors and network errors. Try changing the "message.max.bytes" broker config property to simulate a broker-side error.
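
A minimal sketch of the producer configuration being discussed, with the client-side size limits and broker-error retries spelled out; the broker address, topic, and values are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerRetryConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        // Client-side size limits: violations are rejected before batching,
        // so the retry mechanism never sees them.
        props.put("max.request.size", "1048576");
        props.put("buffer.memory", "33554432");
        // Retries cover retriable broker-side and network errors only.
        props.put("retries", "3");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"),
                    (metadata, exception) -> {
                        if (exception != null) {
                            System.err.println("send failed: " + exception);
                        }
                    });
        }
    }
}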

Re: consumer too fast

2016-03-31 Thread Manikumar Reddy
Hi, 1. A new config property, "max.poll.records", is being introduced in the upcoming 0.10 release. This property can be used to control the number of records returned by each poll. 2. We can use a combination of an ExecutorService/processing thread and the pause/resume API to handle unwanted rebalances. Some
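
A rough sketch of the ExecutorService plus pause/resume idea from point 2, assuming a client version where pause() and resume() take a collection of partitions (older 0.9 clients use a varargs form); broker, group, and topic names are placeholders, and process() stands in for the slow work.

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PauseResumeSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "slow-processing-demo");       // hypothetical group id
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService worker = Executors.newSingleThreadExecutor();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                if (!records.isEmpty()) {
                    // Hand off slow processing so the poll loop keeps the consumer alive.
                    Future<?> done = worker.submit(() -> records.forEach(r -> process(r.value())));
                    consumer.pause(consumer.assignment());   // stop fetching while busy
                    while (!done.isDone()) {
                        consumer.poll(100);                   // returns nothing while paused
                    }
                    consumer.resume(consumer.assignment());
                    consumer.commitSync();
                }
            }
        }
    }

    private static void process(String value) { /* slow work goes here */ }
}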

Re: Is it safe to send messages to Kafka when one of the brokers is down?

2016-03-28 Thread Manikumar Reddy
Hi, 1. Your topic partitions are not replicated (replication factor = 1). Increase the replication factor for better fault tolerance. With proper replication, Kafka brokers/producers can handle node failures without data loss. 2. It looks like the Kafka brokers are not in a cluster. They might

Re: Queue implementation

2016-03-28 Thread Manikumar Reddy
Yes, your scenarios are easy to implement using Kafka. Please go through the Kafka documentation and examples for a better understanding of Kafka concepts, use cases, and design. https://kafka.apache.org/documentation.html https://github.com/apache/kafka/tree/trunk/examples On Tue, Mar 29, 2016 at 9:20 AM,

Re: Custom serializer/deserializer for kafka 0.9.x version

2016-03-28 Thread Manikumar Reddy
Hi, You need to implement the org.apache.kafka.common.serialization.Serializer and org.apache.kafka.common.serialization.Deserializer interfaces. The Encoder and Decoder interfaces are for the older clients. Example code: https://github.com/omkreddy/kafka-example
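
For illustration, a minimal pair of implementations of those two interfaces; Event is a hypothetical payload type serialized as UTF-8 bytes, not anything from the thread.

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// Hypothetical payload type used only for this example.
class Event {
    final String payload;
    Event(String payload) { this.payload = payload; }
}

class EventSerializer implements Serializer<Event> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public byte[] serialize(String topic, Event data) {
        return data == null ? null : data.payload.getBytes(StandardCharsets.UTF_8);
    }
    @Override public void close() { }
}

class EventDeserializer implements Deserializer<Event> {
    @Override public void configure(Map<String, ?> configs, boolean isKey) { }
    @Override public Event deserialize(String topic, byte[] data) {
        return data == null ? null : new Event(new String(data, StandardCharsets.UTF_8));
    }
    @Override public void close() { }
}

These classes would then be registered through the key.serializer/value.serializer producer configs and the key.deserializer/value.deserializer consumer configs.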

Re: Multiple Topics and Consumer Groups

2016-03-27 Thread Manikumar Reddy
A consumer can belong to only one consumer group. https://kafka.apache.org/documentation.html#intro_consumers On Mon, Mar 28, 2016 at 11:01 AM, Vinod Kakad wrote: > Hi, > > I wanted to know if same consumer can be in two consumer groups. > > OR > > How the multiple topic

Re: Offset after message deletion

2016-03-27 Thread Manikumar Reddy
It will continue from the latest offset. The offset is an increasing, contiguous sequence number per partition. On Mon, Mar 28, 2016 at 9:11 AM, Imre Nagi wrote: > Hi All, > > I'm new in kafka. So, I have a question related to kafka offset. > > From the kafka documentation

Re: Re: Topics in Kafka

2016-03-23 Thread Manikumar Reddy
to do the clustering in Storm or Spark Streaming afterwards? > > Thank you in advance. > > Regards, > Daniela > > > > Sent: Wednesday, 23 March 2016 at 09:42 > From: "Manikumar Reddy" <ku...@nmsworks.co.in> > To: "users@kafka.apache.org" <u

Re: Topics in Kafka

2016-03-23 Thread Manikumar Reddy
Hi, 1. Based on your design, it can be one or more topics. You can design one topic per region or one topic for all regions' devices. 2. Yes, you need to listen to the web socket messages and write them to the Kafka server using a Kafka producer. In your use case, you can also send messages using Kafka

Re: Reading data from sensors

2016-03-23 Thread Manikumar Reddy
Hi, you can use librdkafka C library for producing data. https://github.com/edenhill/librdkafka Manikumar On Wed, Mar 23, 2016 at 12:41 PM, Shashidhar Rao wrote: > Hi, > > Can someone help me with reading data from sensors and storing into Kafka. > > At the

Re: Reg : Unable to produce message

2016-03-20 Thread Manikumar Reddy
We may get a few warning exceptions on the first produce to an unknown topic, with the default server config property auto.create.topics.enable = true. If this is the case, then it is a harmless exception. On Sun, Mar 20, 2016 at 11:19 AM, Mohamed Ashiq wrote: > All, > > I am

Re: Larger Size Error Message

2016-03-19 Thread Manikumar Reddy
The DumpLogSegments tool is used to dump partition data logs (not application logs). Usage: ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files /tmp/kafka-logs/TEST-TOPIC-0/.log Use the --key-decoder-class and --value-decoder-class options to pass deserializers. On Fri, Mar

Re: Larger Size Error Message

2016-03-19 Thread Manikumar Reddy
18, 2016 at 12:31 PM, Manikumar Reddy <ku...@nmsworks.co.in> wrote: > DumpLogSegments tool is used to dump partition data logs (not application > logs). > > Usage: > ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files > /tmp/kafka-logs/TEST-TOPIC-0/000

Re: Kafka 0.8.1.1 keeps full GC

2016-03-13 Thread Manikumar Reddy
Hi, These logs are minor GC logs and they look normal. Look for the word 'Full' for full gc log details. On Sun, Mar 13, 2016 at 3:06 PM, li jinyu wrote: > I'm using Kafka 0.8.1.1, have 10 nodes in a cluster, all are started with > default command: >

Re: Kafka 0.9.0.1 broker 0.9 consumer location of consumer group data

2016-03-09 Thread Manikumar Reddy
We need to pass the "--new-consumer" option to the kafka-consumer-groups.sh command to use the new consumer. sh kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list --new-consumer On Thu, Mar 10, 2016 at 12:02 PM, Rajiv Kurian wrote: > Hi Guozhang, > > I tried using

Re: Regarding issue in Kafka-0.8.2.2.3

2016-02-08 Thread Manikumar Reddy
Kafka scripts use the "kafka-run-class.sh" script to set environment variables and run classes, so if you set any environment variable in the "kafka-run-class.sh" script, it will apply to all the scripts. So try to set a different JMX_PORT in kafka-topics.sh. On Mon, Feb 8, 2016 at 9:24 PM,

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
Currently it is available through a JMX MBean; it is not available in the wire protocol/requests. Pending JIRAs related to this: https://issues.apache.org/jira/browse/KAFKA-2061 On Fri, Feb 5, 2016 at 4:31 AM, wrote: > Is there a way to detect the broker version (even at a high

Re: Detecting broker version programmatically

2016-02-04 Thread Manikumar Reddy
@James It is broker-id for the Kafka server and client-id for Java producer/consumer apps. @Dana Yes, we can infer it using custom logic.

Re: Producer code to a partition

2016-02-03 Thread Manikumar Reddy
umber? > > On Thu, Feb 4, 2016 at 7:17 AM, Manikumar Reddy <manikumar.re...@gmail.com > > > wrote: > > > Hi, > > > > You can use ProducerRecord(java.lang.String topic, java.lang.Integer > > partition, K key, V value) constructor > > to pass par

Re: Producer code to a partition

2016-02-03 Thread Manikumar Reddy
Hi, You can use the ProducerRecord(java.lang.String topic, java.lang.Integer partition, K key, V value) constructor to pass the partition number. https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html Kumar On Thu, Feb 4, 2016 at 11:41 AM, Joe San
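
A minimal sketch of that constructor in use; the broker address, topic name, partition number, and key/value are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExplicitPartitionExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Send directly to partition 2 of "my-topic" (both placeholders),
            // bypassing key-based partitioning.
            producer.send(new ProducerRecord<>("my-topic", 2, "key", "value"));
        }
    }
}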

Re: [VOTE] 0.8.2.2 Candidate 1

2015-09-09 Thread Manikumar Reddy
+1 (non-binding). verified the artifacts, quick start. On Wed, Sep 9, 2015 at 2:41 AM, Ashish wrote: > +1 (non-binding) > > Ran the build, works fine. All test cases passed > > On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao wrote: > > This is the first

Re: Query - Compression

2015-08-24 Thread Manikumar Reddy
Hi, If you are using the producer's built-in compression (by setting the compression.type property), then the consumer will automatically decompress the data for you. Kumar On Mon, Aug 24, 2015 at 12:19 PM, ram kumar ramkumarro...@gmail.com wrote: Hi, If i compress the data in producer as snappy, while
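
A minimal sketch of producer-side snappy compression; the broker address and topic are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SnappyProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("compression.type", "snappy");             // batches are compressed on the producer
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "hello"));  // placeholder topic
        }
        // A consumer reading "my-topic" needs no extra configuration;
        // decompression happens transparently on the consumer side.
    }
}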

Re: spark broadcast variable of Kafka producer throws ConcurrentModificationException

2015-08-18 Thread Manikumar Reddy
Hi, it looks like the exception is occurring during Kryo serialization. Make sure you are not concurrently modifying the java.util.Vector data structure. Kumar On Wed, Aug 19, 2015 at 3:32 AM, Shenghua(Daniel) Wan wansheng...@gmail.com wrote: Hi, Did anyone see

Re: [DISCUSSION] Kafka 0.8.2.2 release?

2015-08-14 Thread Manikumar Reddy
+1 for 0.8.2.2 release On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma ism...@juma.me.uk wrote: I think this is a good idea as the change is minimal on our side and it has been tested in production for some time by the reporter. Best, Ismael On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao

Re: logging for Kafka new producer

2015-08-11 Thread Manikumar Reddy
The new producer uses SLF4J for logging, so we can plug in any logging framework such as log4j, java.util.logging, or logback. On Tue, Aug 11, 2015 at 11:38 AM, Tao Feng fengta...@gmail.com wrote: Hi, I am wondering what Kafka new producer uses for logging. Is it log4j? Thanks, -Tao

Re: Partition and consumer configuration

2015-08-10 Thread Manikumar Reddy
Hi, 1. Will Kafka distribute the 100 serialized files randomly (say 20 files go to Partition 1, 25 to Partition 2, etc.), or do I have an option to configure how many files go to which partition? Assuming you are using the new producer, all keyed messages will be distributed based on the
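
A small sketch of key-based distribution with the new producer: records sharing a key hash to the same partition. The broker address, topic, key, and values are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class KeyedPartitioningExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // All records sharing the key "file-group-A" hash to the same partition.
            for (int i = 0; i < 3; i++) {
                RecordMetadata md = producer.send(
                        new ProducerRecord<>("files", "file-group-A", "file-" + i)).get();
                System.out.println("file-" + i + " -> partition " + md.partition());
            }
        }
    }
}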

Re: kafka benchmark tests

2015-07-14 Thread Manikumar Reddy
Yes, A list of Kafka Server host/port pairs to use for establishing the initial connection to the Kafka cluster https://kafka.apache.org/documentation.html#newproducerconfigs On Tue, Jul 14, 2015 at 7:29 PM, Yuheng Du yuheng.du.h...@gmail.com wrote: Does anyone know what is bootstrap.servers=

Re: How to run Kafka in background

2015-06-24 Thread Manikumar Reddy
You can pass the -daemon option to the Kafka startup script: ./kafka-server-start.sh -daemon ../config/server.1.properties On Wed, Jun 24, 2015 at 4:14 PM, bit1...@163.com bit1...@163.com wrote: Hi, I am using kafak 0.8.2.1 , and when I startup Kafka with the script: ./kafka-server-start.sh

Re: Issue with log4j Kafka Appender.

2015-06-18 Thread Manikumar Reddy
You can enable the producer debug log and verify. In 0.8.2.0, you can set the compressionType, requiredNumAcks, and syncSend producer config properties in log4j.xml. The trunk build can take an additional retries property. Manikumar On Thu, Jun 18, 2015 at 1:14 AM, Madhavi Sreerangam

Re: How to specify kafka bootstrap jvm options?

2015-06-17 Thread Manikumar Reddy
Most of the tuning options are available in kafka-run-class.sh. You can override the required variables (KAFKA_HEAP_OPTS, KAFKA_JVM_PERFORMANCE_OPTS) in the kafka-server-start.sh script. On Wed, Jun 17, 2015 at 2:11 PM, luo.fucong bayinam...@gmail.com wrote: I want to tune the kafka jvm options, but

Re: Log compaction not working as expected

2015-06-16 Thread Manikumar Reddy
Hi, Your observation is correct; we never compact the active segment. Some improvements are proposed here: https://issues.apache.org/jira/browse/KAFKA-1981 Manikumar On Tue, Jun 16, 2015 at 5:35 PM, Shayne S shaynest...@gmail.com wrote: Some further information, and is this a bug?

Re: Log compaction not working as expected

2015-06-16 Thread Manikumar Reddy
is the last segment as opposed to the segment that would be written to if something were received right now. On Tue, Jun 16, 2015 at 8:38 AM, Manikumar Reddy ku...@nmsworks.co.in wrote: Hi, Your observation is correct. we never compact the active segment. Some improvements are proposed

Re: cannot make another partition reassignment due to the previous partition reassignment failure

2015-06-15 Thread Manikumar Reddy
Hi, Just delete the /admin/reassign_partitions node from ZooKeeper and try again. #sh zookeeper-shell.sh localhost:2181 delete /admin/reassign_partitions Manikumar On Tue, Jun 16, 2015 at 8:15 AM, Yu Yang yuyan...@gmail.com wrote: HI, We have a kafka 0.8.1.1 cluster. Recently I did

Re: Producer RecordMetaData with Offset -1

2015-06-12 Thread Manikumar Reddy
Hi, What value is set for the acks config property? If acks=0, the producer will not wait for any acknowledgment from the server, and the offset given back for each record will always be set to -1. Manikumar On Fri, Jun 12, 2015 at 7:17 PM, Gokulakannan M (Engineering - Data Platform)
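
A minimal sketch showing the acks=0 behavior being described; the broker address, topic, and value are placeholders.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class AcksZeroOffsetExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("acks", "0");                              // fire-and-forget: no broker acknowledgment
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            RecordMetadata md =
                    producer.send(new ProducerRecord<>("my-topic", "value")).get();
            // With acks=0 the broker never reports the assigned offset, so this
            // prints -1; with acks=1 or acks=all it prints the real offset.
            System.out.println("offset = " + md.offset());
        }
    }
}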

Re: Kafka Rebalance on Watcher event Question

2015-05-11 Thread Manikumar Reddy
May 2015 at 11:06, Manikumar Reddy ku...@nmsworks.co.in wrote: If both C1,C2 belongs to same consumer group, then the re-balance will be triggered. A consumer subscribes to event changes of the consumer id registry within its group. On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh

Re: Kafka Rebalance on Watcher event Question

2015-05-10 Thread Manikumar Reddy
If both C1 and C2 belong to the same consumer group, then the rebalance will be triggered. A consumer subscribes to event changes of the consumer id registry within its group. On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh...@gmail.com wrote: Hi, I am looking at the code of

Re: New producer: metadata update problem on 2 Node cluster.

2015-04-28 Thread Manikumar Reddy
Hi Ewen, Thanks for the response. I agree with you, In some case we should use bootstrap servers. If you have logs at debug level, are you seeing this message in between the connection attempts: Give up sending metadata request since no node is available Yes, this log came for couple

Re: New producer: metadata update problem on 2 Node cluster.

2015-04-27 Thread Manikumar Reddy
Any comments on this issue? On Apr 24, 2015 8:05 PM, Manikumar Reddy ku...@nmsworks.co.in wrote: We are testing new producer on a 2 node cluster. Under some node failure scenarios, producer is not able to update metadata. Steps to reproduce 1. form a 2 node cluster (K1, K2) 2. create

New Java Producer: Single Producer vs multiple Producers

2015-04-24 Thread Manikumar Reddy
We have a 2-node cluster with 100 topics. Should we use a single producer for all topics or create multiple producers? What is the best choice w.r.t. network load/failures, node failures, latency, and locks? Regards, Manikumar

New producer: metadata update problem on 2 Node cluster.

2015-04-24 Thread Manikumar Reddy
We are testing new producer on a 2 node cluster. Under some node failure scenarios, producer is not able to update metadata. Steps to reproduce 1. form a 2 node cluster (K1, K2) 2. create a topic with single partition, replication factor = 2 3. start producing data (producer metadata : K1,K2) 2.

Re: New Java Producer: Single Producer vs multiple Producers

2015-04-24 Thread Manikumar Reddy
because batching dramatically reduces the number of requests (esp using the new java producer). -Jay On Fri, Apr 24, 2015 at 4:54 AM, Manikumar Reddy manikumar.re...@gmail.com wrote: We have a 2 node cluster with 100 topics. should we use a single producer for all topics or create

Re: Broker shuts down due to unrecoverable I/O error

2015-03-03 Thread Manikumar Reddy
Hi, We are running on RedHat Linux with SAN storage. This happened only once. Thanks, Manikumar. On Tue, Mar 3, 2015 at 10:02 PM, Jun Rao j...@confluent.io wrote: Which OS is this on? Is this easily reproducible? Thanks, Jun On Sun, Mar 1, 2015 at 8:24 PM, Manikumar Reddy ku

Broker shuts down due to unrecoverable I/O error

2015-03-01 Thread Manikumar Reddy
The Kafka 0.8.2 server stopped after hitting the I/O exception below. Any thoughts on this exception? Could it be file-system related? [2015-03-01 14:36:27,627] FATAL [KafkaApi-0] Halting due to unrecoverable I/O error while handling produce request: (kafka.server.KafkaApis)

Re: How to measure performance metrics

2015-02-24 Thread Manikumar Reddy
Hi, There are a bunch of metrics available for performance monitoring. These metrics can be monitored with a JMX monitoring tool (JConsole). https://kafka.apache.org/documentation.html#monitoring. Some of the available metrics reporters are:

Re: Custom partitioner in kafka-0.8.2.0

2015-02-19 Thread Manikumar Reddy
Hi, In the new producer, we can specify the partition number as part of the ProducerRecord. From the javadocs: If a valid partition number is specified, that partition will be used when sending the record. If no partition is specified but a key is present, a partition will be chosen using a hash of the key.

Re: regarding custom msg

2015-02-09 Thread Manikumar Reddy
Can you post the exception stack-trace? On Mon, Feb 9, 2015 at 2:58 PM, Gaurav Agarwal gaurav130...@gmail.com wrote: hello We are sending custom message across producer and consumer. But getting class cast exception . This is working fine with String message and string encoder. But this did

Re: Not found NewShinyProducer sync performance metrics

2015-02-08 Thread Manikumar Reddy
Support * http://sematext.com/ On Thu, Feb 5, 2015 at 5:58 AM, Manikumar Reddy ku...@nmsworks.co.in wrote: New Producer uses Kafka's own metrics api. Currently metrics are reported using jmx. Any jmx monitoring tool (jconsole) can be used for monitoring. On Feb 5, 2015 3:56 PM, Xinyi

Re: one message consumed by both consumers in the same group?

2015-02-08 Thread Manikumar Reddy
Hi, bin/kafka-console-consumer.sh --. all the parameters are the same You need to set the same group.id to create a consumer group. By default the console consumer creates a random group.id. You can set group.id by using the --consumer.config /tmp/comsumer.props flag. $$echo group.id=1

Re: Not found NewShinyProducer sync performance metrics

2015-02-05 Thread Manikumar Reddy
The new producer uses Kafka's own metrics API. Currently, metrics are reported using JMX, so any JMX monitoring tool (e.g. JConsole) can be used for monitoring. On Feb 5, 2015 3:56 PM, Xinyi Su xiny...@gmail.com wrote: Hi, I am using kafka-producer-perf-test.sh to study NewShinyProducer *sync* performance.
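
Besides JMX, the same metrics can be read in-process. A minimal sketch, assuming a client version of that era where Metric.value() is still available (later clients replaced it with metricValue()); the broker address is a placeholder.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.Metric;

public class ProducerMetricsExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The same metrics exposed over JMX are also reachable via producer.metrics().
            for (Metric m : producer.metrics().values()) {
                System.out.println(m.metricName().name() + " = " + m.value());
            }
        }
    }
}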

Re: Potential socket leak in kafka sync producer

2015-01-29 Thread Manikumar Reddy
Hope you are closing the producers. Can you share the attachment through gist/pastebin? On Fri, Jan 30, 2015 at 11:11 AM, ankit tyagi ankittyagi.mn...@gmail.com wrote: Hi Jaikiran, I am using ubuntu and was able to reproduce on redhat too. Please find the more information below.

Re: Missing Per-Topic BrokerTopicMetrics in v0.8.2.0

2015-01-27 Thread Manikumar Reddy
running locally. Jason On Mon, Jan 26, 2015 at 8:30 PM, Manikumar Reddy ku...@nmsworks.co.in wrote: If you are using multi-node cluster, then metrics may be reported from other servers. pl check all the servers in the cluster. On Tue, Jan 27, 2015 at 4:12 AM, Kyle Banker

Re: Missing Per-Topic BrokerTopicMetrics in v0.8.2.0

2015-01-26 Thread Manikumar Reddy
If you are using a multi-node cluster, then metrics may be reported from other servers. Please check all the servers in the cluster. On Tue, Jan 27, 2015 at 4:12 AM, Kyle Banker kyleban...@gmail.com wrote: I've been using a custom KafkaMetricsReporter to report Kafka broker metrics to Graphite. In

Re: [kafka-clients] Re: [VOTE] 0.8.2.0 Candidate 2 (with the correct links)

2015-01-26 Thread Manikumar Reddy
+1 (Non-binding) Verified source package, unit tests, release build, topic deletion, compaction and random testing On Mon, Jan 26, 2015 at 6:14 AM, Neha Narkhede n...@confluent.io wrote: +1 (binding) Verified keys, quick start, unit tests. On Sat, Jan 24, 2015 at 4:26 PM, Joe Stein

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
Also Maven artifacts link is not correct On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao j...@confluent.io wrote: Yes, will send out a new email with the correct links. Thanks, Jun On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in wrote: All links are pointing to https

Re: [kafka-clients] [VOTE] 0.8.2.0 Candidate 2

2015-01-21 Thread Manikumar Reddy
All links are pointing to https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/. They should be https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/ right? On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote: This is the second candidate for release of Apache Kafka

Re: Consumer questions

2015-01-17 Thread Manikumar Reddy
a replay of the stream. The example is: KafkaStream.iterator(); which starts at wherever zookeeper recorded as where you left off. With the high level interface, can you request an iterator that starts at the very beginning? On Fri, Jan 16, 2015 at 8:55 PM, Manikumar Reddy ku

Re: dumping JMX data

2015-01-17 Thread Manikumar Reddy
JIRAs related to the issue are https://issues.apache.org/jira/browse/KAFKA-1680 https://issues.apache.org/jira/browse/KAFKA-1679 On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman sc...@woofplanet.com wrote: While I appreciate all the suggestions on other JMX related tools, my question is really

Re: Consumer questions

2015-01-16 Thread Manikumar Reddy
Hi, 1. With SimpleConsumer, you must keep track of the offsets in your application. In the example code, the readOffset variable can be saved in Redis/ZooKeeper; you should plug this logic into your code. The high-level consumer stores the last read offset information in ZooKeeper. 2. You will

Re: Question on running Kafka Producer in Java environment

2015-01-16 Thread Manikumar Reddy
Pl check your classpath. Some jars might be missing. On Sat, Jan 17, 2015 at 7:41 AM, Su She suhsheka...@gmail.com wrote: Hello Everyone, Thank you for the time and help. I had the Kafka Producer running, but am having some trouble now. 1) Using Maven, I wrote a Kafka Producer similar to

Re: [VOTE] 0.8.2.0 Candidate 1

2015-01-15 Thread Manikumar Reddy
Also can we remove delete.topic.enable config property and enable topic deletion by default? On Jan 15, 2015 10:07 PM, Jun Rao j...@confluent.io wrote: Thanks for reporting this. I will remove that option in RC2. Jun On Thu, Jan 15, 2015 at 5:21 AM, Jaikiran Pai jai.forums2...@gmail.com

Re: Delete topic

2015-01-14 Thread Manikumar Reddy
I think now we should delete this config property and allow topic deletion in 0.8.2 Yep, you need to set delete.topic.enable=true. Forgot that step :) 2015-01-14 10:16 GMT-08:00 Jayesh Thakrar j_thak...@yahoo.com.invalid: Does one also need to set the config parameter delete.topic.enable to

Re: Configuring location for server (log4j) logs

2015-01-14 Thread Manikumar Reddy
You just need to set the LOG_DIR property. All logs will be redirected to the LOG_DIR directory. On Thu, Jan 15, 2015 at 11:49 AM, Shannon Lloyd shanl...@gmail.com wrote: By default Kafka writes its server logs into a logs directory underneath the installation root. I'm trying to override this to get

Re: Javadoc errors in MetricName when building with Java 8

2015-01-14 Thread Manikumar Reddy
Thanks for reporting this issue. We should be able to build on java 8. Will correct the javadocs. On Wed, Jan 14, 2015 at 9:26 AM, Shannon Lloyd shanl...@gmail.com wrote: Is Java 8 supported for building Kafka? Or do you only support Java 7? I just noticed that the latest code on the 0.8.2

Re: Get replication and partition count of a topic

2015-01-12 Thread Manikumar Reddy
Hi, The kafka-topics.sh script can be used to retrieve topic information. Ex: sh kafka-topics.sh --zookeeper localhost:2181 --describe --topic TOPIC1 You can look into the TopicCommand.scala code

Re: Kafka broker shutting down after running fine for 1-2 hours

2015-01-10 Thread Manikumar Reddy
Are you running Kafka as a non-daemon process? If yes, there is a chance the process gets killed when you close the terminal. On Sat, Jan 10, 2015 at 9:31 PM, Manikumar Reddy ku...@nmsworks.co.in wrote: Are you seeing any errors/exceptions? Can you paste Kafka log output? On Sat, Jan 10, 2015 at 2

Re: Kafka broker shutting down after running fine for 1-2 hours

2015-01-10 Thread Manikumar Reddy
Are you seeing any errors/exceptions? Can you paste Kafka log output? On Sat, Jan 10, 2015 at 2:42 PM, Kartik Singh kartiksi...@giveter.com wrote: Hello, We have just started using kafka. Our test setup consists of a single partition. We have integrated kafka to our system successfully with

Re: Kafka broker shutting down after running fine for 1-2 hours

2015-01-10 Thread Manikumar Reddy
Sorry, I missed your link. On Sat, Jan 10, 2015 at 9:31 PM, Manikumar Reddy ku...@nmsworks.co.in wrote: Are you seeing any errors/exceptions? Can you paste Kafka log output? On Sat, Jan 10, 2015 at 2:42 PM, Kartik Singh kartiksi...@giveter.com wrote: Hello, We have just started using

Re: kafka monitoring

2015-01-08 Thread Manikumar Reddy
Hi, you need to set the JMX remote port. You can set this by executing the line below in a terminal and starting the server, or by adding it to kafka-run-class.sh and starting the server: export JMX_PORT= (jmx remote port). Then connect JConsole by giving brokerip: On Fri, Jan 9, 2015 at 12:38 AM,

Re: ProducerData jar file

2014-12-11 Thread Manikumar Reddy
Hi, You just need to include the libraries available in the kafka/libs folder. Please follow the example below: https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example On Thu, Dec 11, 2014 at 4:43 PM, kishore kumar akishore...@gmail.com wrote: do i need to download this separately ? my

Re: ProducerData jar file

2014-12-11 Thread Manikumar Reddy
the jars available in libs folder, but this class is not available in that jars, I am using cloudera's CLABS-KAFKA. On Thu, Dec 11, 2014 at 4:55 PM, Manikumar Reddy ku...@nmsworks.co.in wrote: Hi, You just need to include the libraries available in kafka/libs folder. Pl follow below

Re: Pagecache cause OffsetOutOfRangeException

2014-12-02 Thread Manikumar Reddy
You can check the latest/earliest offsets of a given topic by running GetOffsetShell. https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-GetOffsetShell On Tue, Dec 2, 2014 at 2:05 PM, yuanjia8947 yuanjia8...@163.com wrote: Hi all, I'm using kafka 0.8.0 release now. And

Re: Kafka 0.8.2 log cleaner

2014-11-30 Thread Manikumar Reddy
Log cleaner does not support topics with compressed messages. https://issues.apache.org/jira/browse/KAFKA-1374 On Sun, Nov 30, 2014 at 5:33 PM, Mathias Söderberg mathias.soederb...@gmail.com wrote: Does the log cleaner in 0.8.2 support topics with compressed messages? IIRC that wasn't

Re: [DISCUSSION] adding the serializer api back to the new java producer

2014-11-25 Thread Manikumar Reddy
+1 for this change. What about the deserializer class in 0.8.2? Say I am using the new producer with Avro and the old consumer in combination; then I need to provide a custom Decoder implementation for Avro, right? On Tue, Nov 25, 2014 at 9:19 PM, Joe Stein joe.st...@stealth.ly wrote: The serializer is an
