What version of ZooKeeper are you running?
First, check whether there is a znode for /admin/reassign_partitions in
ZooKeeper.
If so, you could try a graceful shutdown of the controller broker.
Once a new controller is elected on another broker, check that znode in
ZooKeeper again.
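As a sketch, the check above could be done with the zookeeper-shell tool bundled with Kafka (assuming ZooKeeper is reachable at localhost:2181 and the scripts are run from the Kafka root):

```shell
# Sketch: check whether a partition reassignment is still pending.
# A missing or empty /admin/reassign_partitions znode means no reassignment
# is in flight; a populated one means the controller is still working on it.
bin/zookeeper-shell.sh localhost:2181 <<'EOF'
get /admin/reassign_partitions
EOF
```

This only inspects state; it does not cancel or alter the reassignment.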
The partition reassignment process only completes after the new replicas are
fully caught up and the old replicas are deleted. So, if the old replica is
down, the process can never complete, which is what you observed. In your
case, if you just want to replace a broker host with a new one, instead of
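For reference, the status of a stuck reassignment can be checked with the reassignment tool's --verify mode (a sketch; reassign.json is a hypothetical file holding the JSON that was originally submitted):

```shell
# Sketch: verify the status of an in-flight partition reassignment.
bin/kafka-reassign-partitions.sh \
  --zookeeper localhost:2181 \
  --reassignment-json-file reassign.json \
  --verify
```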
Thank you, Neha. I appreciate your help.
--
*Have a nice day.*
Regards,
Aniket Kulkarni.
Hi,
I've noticed an interesting behaviour which I hope someone can fully
explain.
I have a 3-node Kafka cluster with a setting of log.retention.hours=168 (7
days) and log.segment.bytes=536870912.
I recently restarted one of the nodes, and its uptime is now 3 days behind
the other 2.
After
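For reference, the retention settings quoted above would sit in each broker's server.properties; a sketch of this poster's configuration (not a recommendation):

```properties
# Keep log segments for 7 days; roll a new segment every 512 MB.
log.retention.hours=168
log.segment.bytes=536870912
```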
Thanks, Jay,
Here is what I did this morning: I git cloned the latest version of Kafka
(I am currently using Kafka 0.8.0; the latest is now 0.8.1.1), and it uses
Gradle to build the project. I am having trouble building it. I installed
Gradle and ran ./gradlew jar in the Kafka root directory, and it came out with:
Having the same question: what happened to the 0.8.2 release? When is it
supposed to happen?
Thanks.
On Tue, Sep 30, 2014 at 12:49 PM, Jonathan Weeks
jonathanbwe...@gmail.com wrote:
I was one of those asking for 0.8.1.2 a few weeks back, when 0.8.2 was at
least 6-8 weeks out.
If we truly believe that
Hi, all
I want to run the example code that ships with the Kafka package. I ran it as
the README says:
To run the demo using scripts:
1. Start Zookeeper and the Kafka server
2. For the simple consumer demo, run bin/java-simple-consumer-demo.sh
3. For the unlimited producer-consumer run, run
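Assuming the script names quoted from the README above (untested assumptions here), the sequence for steps 1 and 2 might look like:

```shell
# 1. Start Zookeeper and the Kafka server using the bundled quickstart configs.
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &

# 2. Run the simple consumer demo (script name taken from the README above).
bin/java-simple-consumer-demo.sh
```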
Hello Sa,
KAFKA-1490 introduces a new step of downloading the wrapper; details are
included in the latest README file.
Guozhang
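Roughly, the new step means bootstrapping the wrapper with a locally installed Gradle before building through ./gradlew (a sketch, assuming Gradle is already on PATH):

```shell
# Bootstrap the Gradle wrapper (the KAFKA-1490 change), then build through it.
gradle          # downloads gradle/wrapper/gradle-wrapper.jar
./gradlew jar   # subsequent builds go through the wrapper
```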
On Thu, Oct 2, 2014 at 11:00 AM, Sa Li sal...@gmail.com wrote:
Thanks, Jay,
Here is what I did this morning, I git clone the latest version of kafka
from git, (I
Hello Dayo,
This is a known issue. Today, Kafka's log rolling / cleaning policy depends
on the creation timestamp of the segment files, which can be modified upon
partition migration or broker restart; this can cause the server to not
honor the specified log cleaning config. Some more details
Yes, here is a Vagrant VirtualBox setup
https://github.com/stealthly/scala-kafka
On Thu, Oct 2, 2014 at 3:51 PM, Mingtao Zhang mail2ming...@gmail.com
wrote:
Thanks for the response!
Does anyone have it working on VirtualBox, which is the case for Windows/Mac?
How do we configure the network
Thanks Guozhang
I tried this as in KAFKA-1490:
git clone https://git-wip-us.apache.org/repos/asf/kafka.git
cd kafka
gradle
but it fails to build:
FAILURE: Build failed with an exception.
* Where:
Script '/home/stuser/trunk/gradle/license.gradle' line: 2
* What went wrong:
A problem
I git cloned the latest Kafka package; why can't I build it? Running:
gradle
FAILURE: Build failed with an exception.
* Where:
Script '/home/ubuntu/kafka/gradle/license.gradle' line: 2
* What went wrong:
A problem occurred evaluating script.
Could not find method create() for arguments
I can't get Gradle to run through, even after cloning the latest trunk. Is
anyone having the same issue?
On Thu, Oct 2, 2014 at 1:55 PM, Sa Li sal...@gmail.com wrote:
Thanks Guozhang
I tried this as in KAFKA-1490:
git clone https://git-wip-us.apache.org/repos/asf/kafka.git
cd kafka
gradle
but fails
Did you install gradle as the README states?
You need to have [gradle](http://www.gradle.org/installation) installed.
Guozhang
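A quick way to confirm which Gradle the shell resolves, since a stale distro-packaged install shadowing a newer one is a common cause of this class of build failure (a generic shell check, not Kafka-specific):

```shell
# Show which gradle binary is first on PATH, and its version.
command -v gradle
gradle --version
```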
On Thu, Oct 2, 2014 at 1:55 PM, Sa Li sal...@gmail.com wrote:
Thanks Guozhang
I tried this as in KAFKA-1490:
git clone
Daniel, thanks for the reply.
Setting up the cluster is still a learning curve for me; we ultimately want
to connect the Kafka cluster to a Storm cluster. As you mentioned, it seems
a single broker per node is more efficient; is it good for handling multiple
topics? In my case, say I can build the
Just to clarify: I am using a 3-server ZooKeeper ensemble (myid: 1, 2, 3).
But in each Kafka node's server.properties, I set zk.connect to localhost,
which means the broker info is stored in the local zkServer. I know it is a
bit weird, rather than having the broker info assigned automatically by the
zkServer leader.
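For contrast, pointing every broker at the full ensemble is a one-line change in each server.properties (host names are placeholders; note the property is spelled zookeeper.connect in 0.8):

```properties
# List all three ensemble members so brokers register with the whole ensemble
# and can fail over if one ZooKeeper server goes down.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
```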
Hello Apache Kafka community,
auto.create.topics.enable configuration option docs state:
Enable auto creation of topic on the server. If this is set to true then
attempts to produce, consume, or fetch metadata for a non-existent topic
will automatically create it with the default replication
We already cut an 0.8.2 release branch. The plan is to have the remaining
blockers resolved before releasing it. Hopefully this will just take a
couple of weeks.
Can you follow the example in quickstart (
http://kafka.apache.org/documentation.html#quickstart)?
Thanks,
Jun
On Thu, Oct 2, 2014 at 12:01 PM, Sa Li sal...@gmail.com wrote:
Hi, all
Here I want to run example code associated with kafka package, I run as
readme says:
To run the demo using
Hmm, not sure what the issue is. You can also just copy the following files
from the 0.8.1 branch.
gradle/wrapper/gradle-wrapper.jar
gradle/wrapper/gradle-wrapper.properties
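One way to grab just those files, assuming the clone's remote has an 0.8.1 branch (a sketch):

```shell
# Copy only the Gradle wrapper files from the 0.8.1 branch into the
# current working tree, leaving everything else untouched.
git checkout origin/0.8.1 -- gradle/wrapper/gradle-wrapper.jar \
                             gradle/wrapper/gradle-wrapper.properties
```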
Thanks,
Jun
On Thu, Oct 2, 2014 at 2:05 PM, Sa Li sal...@gmail.com wrote:
I git clone the latest kafka package, why can't I
Thank you all, I am able to run Gradle now. Here is my mistake: I installed
Gradle both via apt-get and from the Gradle website, but the system
automatically picked the apt-get Gradle to run, and that version is quite
outdated. What I did was apt-get remove gradle and add the newer Gradle's
path to /etc/environment; now it
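The underlying mechanics: the shell runs whichever same-named binary appears first on PATH, which is why the outdated apt-get Gradle kept winning. A self-contained sketch with hypothetical stand-in binaries:

```shell
# Two stand-in 'mytool' binaries mimicking the apt-get vs. manual Gradle clash.
mkdir -p /tmp/old/bin /tmp/new/bin
printf '#!/bin/sh\necho old\n' > /tmp/old/bin/mytool
printf '#!/bin/sh\necho new\n' > /tmp/new/bin/mytool
chmod +x /tmp/old/bin/mytool /tmp/new/bin/mytool

# Whichever directory comes first on PATH wins.
PATH=/tmp/old/bin:/tmp/new/bin mytool   # prints: old
PATH=/tmp/new/bin:/tmp/old/bin mytool   # prints: new
```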
In general, only writers should trigger auto topic creation, not readers.
So, a topic can be auto-created by the producer, but not the consumer.
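A sketch of the producer side of that behavior using the console producer (topic name is arbitrary; assumes a 0.8 broker on localhost:9092 with auto.create.topics.enable=true):

```shell
# Producing to a topic that does not exist yet triggers auto-creation
# with the default replication factor and partition count.
echo "hello" | bin/kafka-console-producer.sh \
  --broker-list localhost:9092 \
  --topic brand-new-topic
```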
Thanks,
Jun
On Thu, Oct 2, 2014 at 2:44 PM, Stevo Slavić ssla...@gmail.com wrote:
Hello Apache Kafka community,
auto.create.topics.enable