Is the broker configured with the correct ZK url and the right namespace?
Thanks,
Jun
On Fri, Jan 23, 2015 at 12:17 AM, Tim Smith secs...@gmail.com wrote:
Using kafka 0.8.1.1, the cluster had been healthy with producers and
consumers being able to function well. After a restart of the
Are you on the latest 0.8.2 branch? We did fix KAFKA-1738 a couple of
months ago, which could prevent new topics from being created.
Thanks,
Jun
On Thu, Jan 22, 2015 at 9:44 PM, Manu Zhang owenzhang1...@gmail.com wrote:
Hi all,
My application creates kafka topic at runtime with
You need to sum the count from all brokers. Note that count is just one of
the attributes in the mbean. You can also get the rate (per min, per 5 min,
etc), hence the name MessagesInPerSec.
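The aggregation described here can be sketched in plain Java. The broker addresses and counts below are made-up placeholders; in practice each number would come from a JMX read of the mbean's Count attribute against one broker:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: each broker's MessagesInPerSec mbean reports a Count for
// that broker only, so a cluster-wide total is the sum over brokers.
public class ClusterMessageCount {

    static long clusterTotal(Map<String, Long> perBrokerCount) {
        long total = 0;
        for (long count : perBrokerCount.values()) {
            total += count; // each broker reports only its own traffic
        }
        return total;
    }

    public static void main(String[] args) {
        // Placeholder readings, one per broker JMX endpoint.
        Map<String, Long> counts = new LinkedHashMap<>();
        counts.put("broker-1:9999", 1200L);
        counts.put("broker-2:9999", 800L);
        counts.put("broker-3:9999", 500L);
        System.out.println(clusterTotal(counts)); // prints 2500
    }
}
```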
Thanks,
Jun
On Thu, Jan 22, 2015 at 10:54 PM, Bhavesh Mistry mistry.p.bhav...@gmail.com
wrote:
Hi
This could be related to KAFKA-1890, which just got fixed in trunk. Could
you try latest trunk?
Thanks,
Jun
On Fri, Jan 23, 2015 at 3:17 AM, tao xiao xiaotao...@gmail.com wrote:
Hi team,
I got an NPE when running the latest mirror maker that is in trunk
[2015-01-23 18:55:20,229] INFO
Yes, we have already filed https://issues.apache.org/jira/browse/KAFKA-1887
to track this.
Thanks,
Jun
On Fri, Jan 23, 2015 at 5:30 AM, Manu Zhang owenzhang1...@gmail.com wrote:
Hi all,
I've been using KafkaServerTestHarness to integration test my kafka
application.
All the tests passed
Yes, in 0.8.2, each mirror maker instance will only be able to consume
from one source. The reasoning is that consuming from
multiple sources complicates monitoring and configuration. If you have
multiple sources to mirror from, it's simpler to just run multiple
instances of mirror maker.
In this case, we have a single shaded jar for our app for deployment (so
just 1 jar on the classpath). Could that be the issue? E.g. all dependent
jars are unpacked into a single jar within our deployment system
On Thu, Jan 22, 2015 at 6:11 PM, Jun Rao j...@confluent.io wrote:
Hmm,
I am running into some problems with Spark Streaming when reading from
Kafka. I used Spark 1.2.0 built on CDH5.
The example is based on:
https://github.com/apache/spark/blob/master/examples/scala-2.10/src/main/scala/org/apache/spark/examples/streaming/KafkaWordCount.scala
* It works with default
Hi, All
From my last ticket (Subject: kafka production server test), Guozhang
kindly pointed me to the system test package that comes with the kafka
source build, which is a really cool package. I took a look at this
package; one thing that is clear is that if I run it on localhost, I don't need to change anything, say,
Sumit,
You can use AdminUtils.deleteTopic(zkClient, topicName). This
will initiate the deleteTopic process by zookeeper notification
to KafkaController.deleteTopicManager. It deletes log files
along with zookeeper topic path as Timothy mentioned.
-Harsha
On Fri, Jan
Yes, that's probably the issue. If you repackage the Kafka jar, you need to
include in the repackaged jar the following files from the original
Kafka jar. Our code looks for version info there.
META-INF/
META-INF/MANIFEST.MF
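For those using Maven Shade to build the single jar, one option is a manifest transformer. The attribute name and value below are assumptions for illustration; check the MANIFEST.MF inside the original kafka jar for the exact entries it carries:

```xml
<!-- Hypothetical Maven Shade configuration: carry version info into
     the shaded jar's manifest so the version lookup still finds it. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
        <manifestEntries>
          <!-- Assumed attribute; copy whatever the original jar declares. -->
          <Version>0.8.2.0</Version>
        </manifestEntries>
      </transformer>
    </transformers>
  </configuration>
</plugin>
```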
Thanks,
Jun
On Fri, Jan 23, 2015 at 11:59 AM, Jason
Sorry this is meant to go to spark users. Ignore this thread.
On Fri, Jan 23, 2015 at 2:25 PM, Chen Song chen.song...@gmail.com wrote:
I am running into some problems with Spark Streaming when reading from
Kafka. I used Spark 1.2.0 built on CDH5.
The example is based on:
I believe that's the only way it's supported from the CLI.
Delete topic actually fully removes the topic from the cluster, which
also includes cleaning the logs and removing it from zookeeper (once
it is fully deleted).
Tim
On Fri, Jan 23, 2015 at 12:13 PM, Sumit Rangwala
Also I found ./kafka/system_test/cluster_config.json is duplicated in each
directory ./kafka/system_test/replication_testsuite/testcase_/
When I change ./kafka/system_test/cluster_config.json, do I need to
overwrite it each
This is a reminder that the deadline for the vote is this Monday, Jan 26,
7pm PT.
Thanks,
Jun
On Wed, Jan 21, 2015 at 8:28 AM, Jun Rao j...@confluent.io wrote:
This is the second candidate for release of Apache Kafka 0.8.2.0. There
have been some changes since the 0.8.2 beta release,
1. Except for that hostname setting being a list instead of a single host,
the changes look reasonable. That is where you want to customize settings
for your setup.
2 & 3. Yes, you'll want to update those files as well. The top-level ones
provide defaults; the ones in specific test directories
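On the hostname point, a cluster_config.json entity entry normally names a single host per entity. A hypothetical, abridged sketch (the real files carry more settings per entity):

```json
{
  "cluster_config": [
    { "entity_id": "0", "hostname": "10.100.70.28", "role": "zookeeper" },
    { "entity_id": "1", "hostname": "10.100.70.29", "role": "broker" }
  ]
}
```

Each additional broker gets its own entity with its own single hostname, rather than one entity listing all five hosts.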
The only impact is that you don't get the mbean that tells you the version
of the jar. That's why it's just a warning.
Thanks,
Jun
On Fri, Jan 23, 2015 at 1:04 PM, Jason Rosenberg j...@squareup.com wrote:
What are the ramifications if it can't find the version? It looks like it
uses it to set
What are the ramifications if it can't find the version? It looks like it
uses it to set a yammer Gauge metric. Anything more than that?
Jason
On Fri, Jan 23, 2015 at 3:24 PM, Jun Rao j...@confluent.io wrote:
Yes, that's probably the issue. If you repackage the Kafka jar, you need to
include
On Fri, Jan 23, 2015 at 2:42 PM, Sa Li sal...@gmail.com wrote:
Also I found ./kafka/system_test/cluster_config.json is duplicated in each
directory ./kafka/system_test/replication_testsuite/testcase_/
Not duplicated, but customized:
$ diff -u system_test/cluster_config.json
Thanks for the reply. Ewen, pertaining to your statement "... hostname setting
being a list instead of a single host", are you saying entity_id 1 or 0?
entity_id: 0,
hostname:
10.100.70.28,10.100.70.29,10.100.70.30,10.100.70.31,10.100.70.32,
entity_id: 1,
hostname:
Thanks Harsha, exactly what I was looking for.
Sumit
On Fri, Jan 23, 2015 at 12:24 PM, Harsha ka...@harsha.io wrote:
Sumit,
You can use AdminUtils.deleteTopic(zkClient, topicName). This
will initiate the deleteTopic process by zookeeper notification
to
Using kafka 0.8.1.1, the cluster had been healthy with producers and
consumers being able to function well. After a restart of the cluster, it
looks like consumers are locked out.
When I try to consume from a topic, I get this warning:
[2015-01-23 07:48:50,667] WARN
Hello,
We tried the ReassignPartitionsCommand to move partitions to new brokers.
The execution initially showed the message "Successfully started reassignment
of partitions". But when I tried to verify using the --verify option, it
reported that some reassignments had failed:
ERROR: Assigned replicas
Thanks Jun.
My build has not included that fix. I'll try out the latest 0.8.2
On Sat Jan 24 2015 at 1:47:40 AM Jun Rao j...@confluent.io wrote:
Are you on the latest 0.8.2 branch? We did fix KAFKA-1738 a couple of
months ago, which could prevent new topics from being created.
Thanks,
Jun
On
Thanks Jun. Hope we could get that in soon
On Sat Jan 24 2015 at 1:50:57 AM Jun Rao j...@confluent.io wrote:
Yes, we have already filed https://issues.apache.org/jira/browse/KAFKA-1887
to track this.
Thanks,
Jun
On Fri, Jan 23, 2015 at 5:30 AM, Manu Zhang owenzhang1...@gmail.com
I don't think so; see if you buy my explanation. We previously defaulted to
the byte array serializer and it was a source of unending frustration and
confusion. Since it wasn't a required config, people just went along
plugging in whatever objects they had, and thinking that changing the
parametric
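The required-serializer change shows up directly in the 0.8.2 producer configuration: both serializer classes must be given explicitly. A minimal sketch (the broker address is a placeholder; the serializer class names are the ones shipped in the 0.8.2 clients jar):

```java
import java.util.Properties;

// Sketch: building the config for the 0.8.2 java producer, where the
// key and value serializers are required settings with no default.
public class ProducerConfigSketch {

    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        // Both serializers must be declared; the types handed to the
        // producer must match what is declared here.
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = producerProps("broker1:9092"); // hypothetical host
        System.out.println(props.getProperty("key.serializer"));
    }
}
```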
A kafka broker never pushes data to a consumer. It's the consumer that does
a long fetch and it provides the offset to read from.
The problem lies in how your consumer handles the (for example) 1000 messages
that it just got. If you handle 500 of them and crash without committing
the offsets
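The scenario described here can be sketched without any real Kafka; the "broker" below is just a list, and the consumer always resumes from the last committed offset:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: why messages are redelivered when a consumer crashes after
// handling part of a fetched batch but before committing offsets.
public class AtLeastOnceSketch {
    static long committedOffset = 0; // last offset safely committed

    // A "long fetch": the consumer supplies the offset to read from.
    static List<String> fetch(List<String> log, int max) {
        int from = (int) committedOffset;
        int to = Math.min(from + max, log.size());
        return new ArrayList<>(log.subList(from, to));
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        for (int i = 0; i < 1000; i++) log.add("msg-" + i);

        // First run: fetch 1000, handle 500, then "crash" before
        // committing, so committedOffset is still 0.
        List<String> batch = fetch(log, 1000);
        int handled = 500; // work was done, but offsets were never committed

        // Restart: the fetch starts again from committedOffset, so the
        // 500 already-handled messages come back a second time.
        List<String> replayed = fetch(log, 1000);
        System.out.println(replayed.get(0)); // prints msg-0, not msg-500
    }
}
```

Committing the offset only after the work is durable turns the crash into duplicate delivery rather than data loss, which is the at-least-once trade-off.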
Thanks, I'm a newbie wrt kafka. I'm using kafka-spout.
Here is the fail handler of kafka-spout; to avoid replaying, do I need
to remove the snippet below from the fail handler?
Can you point me to the official kafka-spout? Can I use the one provided
under the external folder?
if
Hi team,
I got an NPE when running the latest mirror maker that is in trunk
[2015-01-23 18:55:20,229] INFO
[kafkatopic-1_LM-SHC-00950667-1422010513674-cb0bb562], exception during
rebalance (kafka.consumer.ZookeeperConsumerConnector)
java.lang.NullPointerException
at
Hi all,
I've been using KafkaServerTestHarness to integration test my kafka
application.
All the tests passed, but when the server was torn down the error below
was thrown.
[ERROR] [01/23/2015 16:34:19.118] [logger] Controller 0 epoch 1 initiated
state change for partition [testTopic844927,0]
Hi Team,
I was playing around with your recent release 0.8.2-beta.
The producer worked fine, whereas the new consumer did not.
org.apache.kafka.clients.consumer.KafkaConsumer
After digging into the code I realized that the implementation is not yet
available; only the API is present.
Could you please
Manu,
Are you passing ZkStringSerializer to zkClient?
ZkClient zkClient = new ZkClient(ZK_CONN_STRING, 3, 3, new ZkStringSerializer());
AdminUtils.createTopic(zkClient, topic, 1, 1, props);
-Harsha
On Thu, Jan 22, 2015, at 09:44 PM, Manu Zhang wrote:
Hi all,
My application