Looks like you have not changed the default data log directory. By default,
Kafka is configured to store its data logs under /tmp/, and /tmp gets
cleared
on system reboots. Change the log.dirs config property to some other directory.
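For illustration, a broker config fragment along these lines (the directory path here is just an example):

```properties
# config/server.properties
# Store data logs somewhere that survives reboots (path is an example)
log.dirs=/var/lib/kafka-logs
```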
On Thu, Sep 15, 2016 at 11:46 AM, Ali Akhtar
congrats, Jason!
On Wed, Sep 7, 2016 at 9:28 AM, Ashish Singh wrote:
> Congrats, Jason!
>
> On Tuesday, September 6, 2016, Jason Gustafson wrote:
>
> > Thanks all!
> >
> > On Tue, Sep 6, 2016 at 5:13 PM, Becket Qin >
This doc link may help:
http://kafka.apache.org/documentation.html#new_producer_monitoring
On Fri, Aug 19, 2016 at 2:36 AM, David Yu wrote:
> Kafka users,
>
> I want to resurface this post since it becomes crucial for our team to
> understand our recent Samza throughput
Hi,
Can you enable authorization debug logs and check for entries related to
denied operations?
We should also allow the required operations on the Cluster resource.
Thanks,
Manikumar
On Thu, Aug 4, 2016 at 1:51 AM, Bryan Baugher wrote:
> Hi everyone,
>
> I was trying out kerberos on Kafka
Hi,
There are two versions of the slf4j-log4j jar in the build (1.6.1 and 1.7.21).
slf4j-log4j12-1.6.1.jar comes from the streams:examples module.
Thanks,
Manikumar
On Tue, Aug 2, 2016 at 8:31 PM, Ismael Juma wrote:
> Hello Kafka users, developers and client-developers,
>
> This
Many issues related to the delete-topic functionality were fixed in the latest
versions. It is highly recommended to move to the latest version.
https://issues.apache.org/jira/browse/KAFKA-1757 fixes a similar issue on the
Windows platform.
On Thu, Jul 28, 2016 at 3:40 PM, Ghosh, Prabal Kumar <
prabal.kumar.gh...@sap.com>
You already got a reply from Guozhang on the dev mailing list.
On Thu, Jul 28, 2016 at 7:09 AM, Pierre Coquentin <
pierre.coquen...@gmail.com> wrote:
> Hi,
>
> I've a simple technical question about kafka streams.
> In class org.apache.kafka.streams.processor.internals.StreamTask, the
> method
Also check whether any value is set for the log.retention.bytes broker config.
On Wed, Jul 27, 2016 at 8:03 PM, Samuel Taylor wrote:
> Is it possible that your log directory is in /tmp/ and your OS is deleting
> that directory? I know it's happened to me before.
>
> - Samuel
>
> On
are a) upgrade b) backport the patch yourself. b) seems extremely risky to
> me
>
> Thanks
>
> Tom
>
> On Tue, Jul 19, 2016 at 5:49 AM, Manikumar Reddy <
> manikumar.re...@gmail.com>
> wrote:
>
> > Try increasing log cleaner threads.
> >
> > On Tue, Ju
>at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> >[2016-06-24 09:57:39,881] INFO [kafka-log-cleaner-thread-0], Stopped
> (kafka.log.LogCleaner)
> >
> >
> >Is log.cleaner.dedupe.buffer.size a broker setting? What is a good
> number to set it to?
Hi,
Which Kafka version are you using?
SASL/PLAIN support is available from the Kafka 0.10.0.0 release onwards.
Thanks
Manikumar
On Fri, Jul 15, 2016 at 4:22 PM, cs user wrote:
> Apologies, just to me clear, my broker settings are actually as below,
> using PLAINTEXT
?
>
> Thanks again!
>
>
> Lawrence Weikum
>
> On 7/13/16, 10:34 AM, "Manikumar Reddy" <manikumar.re...@gmail.com> wrote:
>
> Hi,
>
> Are you seeing any errors in log-cleaner.log? The log-cleaner thread can
> crash on certain errors.
>
> Thank
Hi,
Are you seeing any errors in log-cleaner.log? The log-cleaner thread can
crash on certain errors.
Thanks
Manikumar
On Wed, Jul 13, 2016 at 9:54 PM, Lawrence Weikum
wrote:
> Hello,
>
> We’re seeing a strange behavior in Kafka 0.9.0.1 which occurs about every
> other
Hi,
The consumer.subscribe(Pattern p, ...) implementation tries to fetch
metadata for all topics.
This will throw TopicAuthorizationException on internal topics and other
unauthorized topics.
We may need to move the pattern matching to the server side.
Is this a known issue? If not, I will raise
Hi,
Kafka internally creates the offsets topic (__consumer_offsets) with
compaction enabled.
From 0.9.0.1 onwards, log.cleaner.enable=true by default. This means topics
with cleanup.policy=compact will now be compacted by default.
You can tweak the offsets topic configuration by using the below
I agree with Harsha and Marcus. Many Kafka users are still on Java 7,
and
it will take time for some of them to upgrade to newer versions. We may need to
keep the support for a while.
We can remove the support from the next major version onwards.
Thanks,
Manikumar
On Fri, Jun 17, 2016 at 2:04 PM, Marcus Gründler
ed in meta.properties. Am i right?
>
> Thanks
>
> On Thu, May 19, 2016 at 7:14 PM, Manikumar Reddy <
> manikumar.re...@gmail.com>
> wrote:
>
> > Auto broker id generation logic:
> > 1. If there is a user provided broker.id, then it is used and id range
> is
Hi,
commitId is simply the latest git commit hash of the release, captured
while building the binary distribution. commitId is available in the binary release
(kafka_2.10-0.10.0.0.tgz).
commitId will not be available if you build from the source release
(kafka-0.10.0.0-src.tgz).
On Wed, May 18, 2016 at
Auto broker id generation logic:
1. If there is a user-provided broker.id, then it is used; the valid id range is
from 0 to reserved.broker.max.id.
2. If there is no user-provided broker.id, then auto id generation starts
from reserved.broker.max.id + 1.
3. broker.id is stored in the meta.properties file under
Hi,
This is a known issue. Check the below links for the related discussion:
https://issues.apache.org/jira/browse/KAFKA-3494
https://qnalist.com/questions/6420696/discuss-mbeans-overwritten-with-identical-clients-on-a-single-jvm
Manikumar
On Wed, May 11, 2016 at 7:29 PM, Paul Mackles
Hi,
Are you enabling log compaction on a topic with compressed messages?
If yes, that might be the reason for the exception. Log compaction in
0.8.2.2 does
not support compressed messages. This was fixed in 0.9.0.0 (KAFKA-1641,
KAFKA-1374).
Check the below mail thread for some corrective
This book can help you:
Kafka: The Definitive Guide (
http://shop.oreilly.com/product/0636920044123.do)
On Thu, Apr 21, 2016 at 9:38 PM, Mudit Agarwal
wrote:
> Hi,
> Any recommendations for any online guide/link on managing/Administration
> of kafka cluster.
>
Did you set the broker config property log.cleanup.policy=compact or the
topic-level property cleanup.policy=compact?
On Thu, Apr 21, 2016 at 7:16 PM, Kasim Doctor wrote:
> Hi everyone,
>
> I have a cluster of 5 brokers with Kafka 2.10_0.8.2.1 and one of the
> topics compacted
Hi,
log compaction related JMX metric object names are given below.
kafka.log:type=LogCleaner,name=cleaner-recopy-percent
kafka.log:type=LogCleaner,name=max-buffer-utilization-percent
kafka.log:type=LogCleaner,name=max-clean-time-secs
kafka.log:type=LogCleanerManager,name=max-dirty-percent
Hi,
kafka.log:type=LogCleaner,name=cleaner-recopy-percent
kafka.log:type=LogCleanerManager,name=max-dirty-percent
kafka.log:type=LogCleaner,name=max-clean-time-secs
After every compaction cycle, we also print some useful statistics to the
logs/log-cleaner.log file.
On Wed, Apr 13, 2016 at 7:16
> Oleg
> > On Apr 12, 2016, at 9:22 AM, Manikumar Reddy <manikumar.re...@gmail.com>
> wrote:
> >
> > New consumer config property "max.poll.records" is getting introduced
> in
> > upcoming 0.10 release.
> > This property can be used to contro
A new consumer config property, "max.poll.records", is being introduced in
the upcoming 0.10 release.
This property can be used to control the number of records returned by each poll.
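As a sketch, the consumer configuration would look like this (the value 100 is arbitrary):

```properties
# new consumer config (0.10+)
# Upper bound on the number of records returned by a single poll() (value is an example)
max.poll.records=100
```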
Manikumar
On Tue, Apr 12, 2016 at 6:26 PM, Oleg Zhurakousky <
ozhurakou...@hortonworks.com> wrote:
> Is there a way to specify
Hi,
Producer message size validation checks ("buffer.memory",
"max.request.size") happen before
batching and sending messages. The retry mechanism applies to broker-side
errors and network errors.
Try changing the "message.max.bytes" broker config property to simulate a
broker-side error.
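For example, a broker config fragment that lowers the broker-side limit (the value is illustrative):

```properties
# config/server.properties
# Maximum message size the broker will accept; producing anything larger
# triggers a broker-side error (value is an example)
message.max.bytes=100000
```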
Hi,
1. A new config property, "max.poll.records", is being introduced in the
upcoming 0.10 release.
This property can be used to control the number of records returned by each poll.
2. We can use the combination of an ExecutorService/processing thread and the
pause/resume API to handle unwanted rebalances.
Some
Hi,
1. Your topic partitions are not replicated (replication factor = 1).
Increase the replication factor for better fault tolerance.
With proper replication, Kafka brokers/producers can handle node
failures without data loss.
2. It looks like the Kafka brokers are not in a cluster. They might
Yes, your scenarios are easy to implement using Kafka. Please go through the
Kafka documentation and examples for a better
understanding of Kafka concepts, use cases, and design.
https://kafka.apache.org/documentation.html
https://github.com/apache/kafka/tree/trunk/examples
On Tue, Mar 29, 2016 at 9:20 AM,
Hi,
You need to implement the org.apache.kafka.common.serialization.Serializer and
org.apache.kafka.common.serialization.Deserializer
interfaces. The Encoder and Decoder interfaces are for the older clients.
Example code:
https://github.com/omkreddy/kafka-example
A consumer can belong to only one consumer group.
https://kafka.apache.org/documentation.html#intro_consumers
On Mon, Mar 28, 2016 at 11:01 AM, Vinod Kakad wrote:
> Hi,
>
> I wanted to know if same consumer can be in two consumer groups.
>
> OR
>
> How the multiple topic
It will continue from the latest offset. An offset is an increasing,
contiguous sequence number per partition.
On Mon, Mar 28, 2016 at 9:11 AM, Imre Nagi wrote:
> Hi All,
>
> I'm new in kafka. So, I have a question related to kafka offset.
>
> From the kafka documentation
to do the clustering in Storm or Spark Streaming afterwards?
>
> Thank you in advance.
>
> Regards,
> Daniela
>
>
>
> Gesendet: Mittwoch, 23. März 2016 um 09:42 Uhr
> Von: "Manikumar Reddy" <ku...@nmsworks.co.in>
> An: "users@kafka.apache.org" <u
Hi,
1. Based on your design, it can be one or more topics. You can design one
topic per region or
one topic for all regions' devices.
2. Yes, you need to listen for web socket messages and write them to the
Kafka cluster using a Kafka producer.
In your use case, you can also send messages using Kafka
Hi,
You can use the librdkafka C library for producing data.
https://github.com/edenhill/librdkafka
Manikumar
On Wed, Mar 23, 2016 at 12:41 PM, Shashidhar Rao wrote:
> Hi,
>
> Can someone help me with reading data from sensors and storing into Kafka.
>
> At the
We may get a few warning exceptions on the first produce to an unknown topic
with the default server config property auto.create.topics.enable=true. If this is
the case, then it is a harmless exception.
On Sun, Mar 20, 2016 at 11:19 AM, Mohamed Ashiq
wrote:
> All,
>
> I am
The DumpLogSegments tool is used to dump partition data logs (not application
logs).
Usage:
./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
/tmp/kafka-logs/TEST-TOPIC-0/.log
Use the --key-decoder-class and --value-decoder-class options to pass
deserializers.
On Fri, Mar
18, 2016 at 12:31 PM, Manikumar Reddy <ku...@nmsworks.co.in>
wrote:
> DumpLogSegments tool is used to dump partition data logs (not application
> logs).
>
> Usage:
> ./bin/kafka-run-class.sh kafka.tools.DumpLogSegments --files
> /tmp/kafka-logs/TEST-TOPIC-0/000
Hi,
These logs are minor GC logs, and they look normal. Look for the word 'Full'
for full GC log details.
On Sun, Mar 13, 2016 at 3:06 PM, li jinyu wrote:
> I'm using Kafka 0.8.1.1, have 10 nodes in a cluster, all are started with
> default command:
>
We need to pass the "--new-consumer" flag to the kafka-consumer-groups.sh
command to use the new consumer.
sh kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
--new-consumer
On Thu, Mar 10, 2016 at 12:02 PM, Rajiv Kurian wrote:
> Hi Guozhang,
>
> I tried using
The Kafka scripts use the "kafka-run-class.sh" script to set environment
variables and run classes, so any environment variable you set
in "kafka-run-class.sh" will apply to all the
scripts. Try setting a different JMX_PORT in kafka-topics.sh.
On Mon, Feb 8, 2016 at 9:24 PM,
Currently it is available through a JMX MBean. It is not available in the wire
protocol/requests.
Pending JIRAs related to this:
https://issues.apache.org/jira/browse/KAFKA-2061
On Fri, Feb 5, 2016 at 4:31 AM, wrote:
> Is there a way to detect the broker version (even at a high
@James
It is broker-id for the Kafka server and client-id for Java
producer/consumer apps.
@Dana
Yes, we can infer it using custom logic.
umber?
>
> On Thu, Feb 4, 2016 at 7:17 AM, Manikumar Reddy <manikumar.re...@gmail.com
> >
> wrote:
>
> > Hi,
> >
> > You can use ProducerRecord(java.lang.String topic, java.lang.Integer
> > partition, K key, V value) constructor
> > to pass par
Hi,
You can use the ProducerRecord(java.lang.String topic, java.lang.Integer
partition, K key, V value) constructor
to pass the partition number.
https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/producer/ProducerRecord.html
Kumar
On Thu, Feb 4, 2016 at 11:41 AM, Joe San
+1 (non-binding). verified the artifacts, quick start.
On Wed, Sep 9, 2015 at 2:41 AM, Ashish wrote:
> +1 (non-binding)
>
> Ran the build, works fine. All test cases passed
>
> On Thu, Sep 3, 2015 at 9:22 AM, Jun Rao wrote:
> > This is the first
Hi,
If you are using the producer's built-in compression (by
setting the compression.type property),
then the consumer will automatically decompress the data for you.
Kumar
On Mon, Aug 24, 2015 at 12:19 PM, ram kumar ramkumarro...@gmail.com wrote:
Hi,
If i compress the data in producer as snappy,
while
Hi,
Looks like the exception is occurring during Kryo serialization. Make sure
you are not concurrently modifying the java.util.Vector data structure.
kumar
On Wed, Aug 19, 2015 at 3:32 AM, Shenghua(Daniel) Wan wansheng...@gmail.com
wrote:
Hi,
Did anyone see
+1 for 0.8.2.2 release
On Fri, Aug 14, 2015 at 5:49 PM, Ismael Juma ism...@juma.me.uk wrote:
I think this is a good idea as the change is minimal on our side and it has
been tested in production for some time by the reporter.
Best,
Ismael
On Fri, Aug 14, 2015 at 1:15 PM, Jun Rao
The new producer uses SLF4J for logging. You can plug in any logging
framework, such as log4j, java.util.logging, or logback.
On Tue, Aug 11, 2015 at 11:38 AM, Tao Feng fengta...@gmail.com wrote:
Hi,
I am wondering what Kafka new producer uses for logging. Is it log4j?
Thanks,
-Tao
Hi,
1. Will Kafka distribute the 100 serialized files randomly say 20 files go
to Partition 1, 25 to Partition 2 etc or do I have an option to configure
how many files go to which partition .
Assuming you are using the new producer,
all keyed messages will be distributed based on the
Yes. It is a list of Kafka server host/port pairs used for establishing the
initial connection to the Kafka cluster.
https://kafka.apache.org/documentation.html#newproducerconfigs
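A minimal sketch (the host names are placeholders); the client discovers the rest of the cluster from these initial contacts:

```properties
# producer/consumer config
# Initial contact points only, not necessarily the full broker list
bootstrap.servers=broker1:9092,broker2:9092
```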
On Tue, Jul 14, 2015 at 7:29 PM, Yuheng Du yuheng.du.h...@gmail.com wrote:
Does anyone know what is bootstrap.servers=
You can pass the -daemon flag to the Kafka startup script.
./kafka-server-start.sh -daemon ../config/server.1.properties
On Wed, Jun 24, 2015 at 4:14 PM, bit1...@163.com bit1...@163.com wrote:
Hi,
I am using kafak 0.8.2.1 , and when I startup Kafka with the script:
./kafka-server-start.sh
You can enable the producer debug log and verify. In 0.8.2.0, you can set the
compressionType,
requiredNumAcks, and syncSend producer config properties in log4j.xml. The
trunk build accepts an additional retries property.
Manikumar
On Thu, Jun 18, 2015 at 1:14 AM, Madhavi Sreerangam
Most of the tuning options are set in kafka-run-class.sh. You can
override the required properties (KAFKA_HEAP_OPTS, KAFKA_JVM_PERFORMANCE_OPTS) in
the kafka-server-start.sh script.
On Wed, Jun 17, 2015 at 2:11 PM, luo.fucong bayinam...@gmail.com wrote:
I want to tune the kafka jvm options, but
Hi,
Your observation is correct: we never compact the active segment.
Some improvements are proposed here:
https://issues.apache.org/jira/browse/KAFKA-1981
Manikumar
On Tue, Jun 16, 2015 at 5:35 PM, Shayne S shaynest...@gmail.com wrote:
Some further information, and is this a bug?
is the last segment as opposed to the segment that would be written to if
something were received right now.
On Tue, Jun 16, 2015 at 8:38 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
Hi,
Your observation is correct. we never compact the active segment.
Some improvements are proposed
Hi,
Just delete the /admin/reassign_partitions ZooKeeper node and
try again:
#sh zookeeper-shell.sh localhost:2181
delete /admin/reassign_partitions
Manikumar
On Tue, Jun 16, 2015 at 8:15 AM, Yu Yang yuyan...@gmail.com wrote:
HI,
We have a kafka 0.8.1.1 cluster. Recently I did
Hi,
What is the value set for the acks config property?
If acks=0, the producer will not wait for any acknowledgment from the
server, and
the offset given back for each record will always be -1.
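A producer config sketch of the fix; with acks=1 (or all), the send result carries a real offset:

```properties
# producer config
# acks=0: fire-and-forget, offsets in the send result are always -1
# acks=1: wait for the partition leader's acknowledgment
acks=1
```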
Manikumar
On Fri, Jun 12, 2015 at 7:17 PM, Gokulakannan M (Engineering - Data
Platform)
May 2015 at 11:06, Manikumar Reddy ku...@nmsworks.co.in wrote:
If both C1 and C2 belong to the same consumer group, then a re-balance will be
triggered.
A consumer subscribes to event changes of the consumer id registry within
its group.
On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh
If both C1 and C2 belong to the same consumer group, then a re-balance will be
triggered.
A consumer subscribes to event changes of the consumer id registry within
its group.
On Mon, May 11, 2015 at 10:55 AM, dinesh kumar dinesh...@gmail.com wrote:
Hi,
I am looking at the code of
Hi Ewen,
Thanks for the response. I agree with you; in some cases we should use the
bootstrap servers.
If you have logs at debug level, are you seeing this message in between the
connection attempts:
Give up sending metadata request since no node is available
Yes, this log came for couple
Any comments on this issue?
On Apr 24, 2015 8:05 PM, Manikumar Reddy ku...@nmsworks.co.in wrote:
We are testing new producer on a 2 node cluster.
Under some node failure scenarios, producer is not able
to update metadata.
Steps to reproduce
1. form a 2 node cluster (K1, K2)
2. create
We have a 2 node cluster with 100 topics.
Should we use a single producer for all topics, or create multiple
producers?
What is the best choice w.r.t. network load/failures, node failures,
latency, and locks?
Regards,
Manikumar
We are testing new producer on a 2 node cluster.
Under some node failure scenarios, producer is not able
to update metadata.
Steps to reproduce
1. form a 2 node cluster (K1, K2)
2. create a topic with single partition, replication factor = 2
3. start producing data (producer metadata : K1,K2)
2.
because batching dramatically reduces the number of
requests (esp using the new java producer).
-Jay
On Fri, Apr 24, 2015 at 4:54 AM, Manikumar Reddy
manikumar.re...@gmail.com
wrote:
We have a 2 node cluster with 100 topics.
should we use a single producer for all topics or create
Hi,
We are running on RedHat Linux with SAN storage. This happened only once.
Thanks,
Manikumar.
On Tue, Mar 3, 2015 at 10:02 PM, Jun Rao j...@confluent.io wrote:
Which OS is this on? Is this easily reproducible?
Thanks,
Jun
On Sun, Mar 1, 2015 at 8:24 PM, Manikumar Reddy ku
Kafka 0.8.2 server got stopped after getting below I/O exception.
Any thoughts on below exception? Can it be file system related?
[2015-03-01 14:36:27,627] FATAL [KafkaApi-0] Halting due to unrecoverable
I/O error while handling produce request: (kafka.serv
er.KafkaApis)
Hi,
There are a bunch of metrics available for performance monitoring. These
metrics can be monitored
with any JMX monitoring tool (e.g. JConsole).
https://kafka.apache.org/documentation.html#monitoring.
Some of the available metrics reporters are:
Hi,
In the new producer, we can specify the partition number as part of the
ProducerRecord.
From javadocs :
*If a valid partition number is specified that partition will be used when
sending the record. If no partition is specified but a key is present a
partition will be chosen using a hash of the key.
Can you post the exception stack-trace?
On Mon, Feb 9, 2015 at 2:58 PM, Gaurav Agarwal gaurav130...@gmail.com
wrote:
hello
We are sending custom message across producer and consumer. But
getting class cast exception . This is working fine with String
message and string encoder.
But this did
On Thu, Feb 5, 2015 at 5:58 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
New Producer uses Kafka's own metrics api. Currently metrics are
reported
using jmx. Any jmx monitoring tool (jconsole) can be used for
monitoring.
On Feb 5, 2015 3:56 PM, Xinyi
Hi,
bin/kafka-console-consumer.sh --.
all the parameters are the same
You need to set the same group.id to create a consumer group. By default the
console consumer creates a random group.id.
You can set the group.id by using the --consumer.config /tmp/consumer.props
flag.
$ echo group.id=1
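A sketch of that setup; the file name, group id, and topic here are arbitrary examples:

```shell
# Write a shared group.id into a consumer config file (name/value are examples)
echo "group.id=1" > /tmp/consumer-group1.props
# Each console consumer started with this file joins the same group, e.g.:
#   bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic TEST \
#     --consumer.config /tmp/consumer-group1.props
cat /tmp/consumer-group1.props
```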
The new producer uses Kafka's own metrics API. Currently metrics are reported
via JMX. Any JMX monitoring tool (e.g. JConsole) can be used for monitoring.
On Feb 5, 2015 3:56 PM, Xinyi Su xiny...@gmail.com wrote:
Hi,
I am using kafka-producer-perf-test.sh to study NewShinyProducer *sync*
performance.
Hope you are closing the producers. Can you share the attachment through
gist/pastebin?
On Fri, Jan 30, 2015 at 11:11 AM, ankit tyagi ankittyagi.mn...@gmail.com
wrote:
Hi Jaikiran,
I am using ubuntu and was able to reproduce on redhat too. Please find the
more information below.
running locally.
Jason
On Mon, Jan 26, 2015 at 8:30 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
If you are using multi-node cluster, then metrics may be reported from
other servers.
pl check all the servers in the cluster.
On Tue, Jan 27, 2015 at 4:12 AM, Kyle Banker
If you are using a multi-node cluster, then metrics may be reported from
other servers.
Please check all the servers in the cluster.
On Tue, Jan 27, 2015 at 4:12 AM, Kyle Banker kyleban...@gmail.com wrote:
I've been using a custom KafkaMetricsReporter to report Kafka broker
metrics to Graphite. In
+1 (Non-binding)
Verified source package, unit tests, release build, topic deletion,
compaction and random testing
On Mon, Jan 26, 2015 at 6:14 AM, Neha Narkhede n...@confluent.io wrote:
+1 (binding)
Verified keys, quick start, unit tests.
On Sat, Jan 24, 2015 at 4:26 PM, Joe Stein
Also, the Maven artifacts link is not correct.
On Wed, Jan 21, 2015 at 9:50 PM, Jun Rao j...@confluent.io wrote:
Yes, will send out a new email with the correct links.
Thanks,
Jun
On Wed, Jan 21, 2015 at 3:12 AM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
All links are pointing to
https
All links are pointing to
https://people.apache.org/~junrao/kafka-0.8.2.0-candidate1/.
They should be https://people.apache.org/~junrao/kafka-0.8.2.0-candidate2/
right?
On Tue, Jan 20, 2015 at 8:32 AM, Jun Rao j...@confluent.io wrote:
This is the second candidate for release of Apache Kafka
a replay
of the stream. The example is:
KafkaStream.iterator();
which starts at wherever zookeeper recorded as where you left off.
With the high level interface, can you request an iterator that starts at
the very beginning?
On Fri, Jan 16, 2015 at 8:55 PM, Manikumar Reddy ku
JIRAs related to the issue are
https://issues.apache.org/jira/browse/KAFKA-1680
https://issues.apache.org/jira/browse/KAFKA-1679
On Sun, Jan 18, 2015 at 3:12 AM, Scott Chapman sc...@woofplanet.com wrote:
While I appreciate all the suggestions on other JMX related tools, my
question is really
Hi,
1. With SimpleConsumer, you must keep track of the offsets in your
application.
In the example code, the readOffset variable can be saved in
Redis/ZooKeeper.
You should plug this logic into your code. The high-level consumer stores
the last
read offset information in ZooKeeper.
2. You will
Please check your classpath; some jars might be missing.
On Sat, Jan 17, 2015 at 7:41 AM, Su She suhsheka...@gmail.com wrote:
Hello Everyone,
Thank you for the time and help. I had the Kafka Producer running, but am
having some trouble now.
1) Using Maven, I wrote a Kafka Producer similar to
Also, can we remove the delete.topic.enable config property and enable topic
deletion by default?
On Jan 15, 2015 10:07 PM, Jun Rao j...@confluent.io wrote:
Thanks for reporting this. I will remove that option in RC2.
Jun
On Thu, Jan 15, 2015 at 5:21 AM, Jaikiran Pai jai.forums2...@gmail.com
I think we should now remove this config property and allow topic deletion
in 0.8.2.
Yep, you need to set delete.topic.enable=true.
Forgot that step :)
2015-01-14 10:16 GMT-08:00 Jayesh Thakrar j_thak...@yahoo.com.invalid:
Does one also need to set the config parameter delete.topic.enable to
You just need to set the LOG_DIR property. All logs will be redirected to the
LOG_DIR directory.
On Thu, Jan 15, 2015 at 11:49 AM, Shannon Lloyd shanl...@gmail.com wrote:
By default Kafka writes its server logs into a logs directory underneath
the installation root. I'm trying to override this to get
Thanks for reporting this issue. We should be able to build on Java 8. We
will correct the javadocs.
On Wed, Jan 14, 2015 at 9:26 AM, Shannon Lloyd shanl...@gmail.com wrote:
Is Java 8 supported for building Kafka? Or do you only support Java 7? I
just noticed that the latest code on the 0.8.2
Hi,
kafka-topics.sh script can be used to retrieve topic information.
Ex: sh kafka-topics.sh --zookeeper localhost:2181 --describe --topic TOPIC1
You can look into TopicCommand.scala code
Are you running Kafka as a non-daemon process?
If so, there is a chance of the process getting killed when the terminal is closed.
On Sat, Jan 10, 2015 at 9:31 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
Are you seeing any errors/exceptions? Can you paste Kafka log output?
On Sat, Jan 10, 2015 at 2
Are you seeing any errors/exceptions? Can you paste Kafka log output?
On Sat, Jan 10, 2015 at 2:42 PM, Kartik Singh kartiksi...@giveter.com
wrote:
Hello,
We have just started using kafka. Our test setup consists of a single
partition. We have integrated kafka to our system successfully with
Sorry, I missed your link.
On Sat, Jan 10, 2015 at 9:31 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
Are you seeing any errors/exceptions? Can you paste Kafka log output?
On Sat, Jan 10, 2015 at 2:42 PM, Kartik Singh kartiksi...@giveter.com
wrote:
Hello,
We have just started using
Hi,
You need to set the JMX remote port.
You can set it by executing the below line in a terminal before starting the
server, or by adding it to kafka-run-class.sh:
export JMX_PORT=<jmx remote port>
Then connect JConsole using brokerip:
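A sketch of the terminal steps, with 9999 as an arbitrary example port:

```shell
# Expose JMX on port 9999 before starting the broker (port is an example)
export JMX_PORT=9999
# bin/kafka-server-start.sh config/server.properties
# Then point JConsole at <broker-ip>:9999
echo "$JMX_PORT"
```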
On Fri, Jan 9, 2015 at 12:38 AM,
Hi,
You just need to include the libraries available in the kafka/libs folder.
Please follow the below example:
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example
On Thu, Dec 11, 2014 at 4:43 PM, kishore kumar akishore...@gmail.com
wrote:
do i need to download this separately ? my
the jars available in libs folder,
but this class is not available in that jars, I am using cloudera's
CLABS-KAFKA.
On Thu, Dec 11, 2014 at 4:55 PM, Manikumar Reddy ku...@nmsworks.co.in
wrote:
Hi,
You just need to include the libraries available in kafka/libs folder.
Pl follow below
You can check the latest/earliest offsets of a given topic by running
GetOffsetShell.
https://cwiki.apache.org/confluence/display/KAFKA/System+Tools#SystemTools-GetOffsetShell
On Tue, Dec 2, 2014 at 2:05 PM, yuanjia8947 yuanjia8...@163.com wrote:
Hi all,
I'm using kafka 0.8.0 release now. And
The log cleaner does not support topics with compressed messages.
https://issues.apache.org/jira/browse/KAFKA-1374
On Sun, Nov 30, 2014 at 5:33 PM, Mathias Söderberg
mathias.soederb...@gmail.com wrote:
Does the log cleaner in 0.8.2 support topics with compressed messages? IIRC
that wasn't
+1 for this change.
What about the deserializer class in 0.8.2? Say I am using the new producer with
Avro and the old consumer combination;
then I need to provide a custom Decoder implementation for Avro, right?
On Tue, Nov 25, 2014 at 9:19 PM, Joe Stein joe.st...@stealth.ly wrote:
The serializer is an