What is your use case? Why do you want such persistence in Kafka? For
that kind of persistence I think you should use a NoSQL database such as
Cassandra or MongoDB.
thanks
Ashutosh
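(Worth noting, though, that Kafka itself retains messages on disk for a configurable window, so long retention alone may not require a separate store. A rough sketch of the relevant settings; the property names are from the standard broker/topic configuration and the values are purely illustrative:)

```
# server.properties (broker-wide defaults)
log.retention.hours=168      # keep messages for 7 days
log.retention.bytes=-1       # no size-based limit

# or per topic: retention.ms for a per-topic window, or
# cleanup.policy=compact to keep the latest value per key
# instead of deleting old segments
```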
On Wed, Jun 1, 2016 at 9:06 AM, VG wrote:
> Hi,
>
> There are a number of messages floating around on the internet
http://docs.confluent.io/2.0.0/connect/connect-hdfs/docs/index.html
On Wed, Apr 27, 2016 at 1:59 PM, Mudit Kumar wrote:
> Hi,
>
> I have a running Kafka setup with 3 brokers. Now I want to sink all Kafka
> topics to HDFS. My Hadoop cluster is already up and running.
> Any
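For reference, a sketch of what the connector setup linked above looks like, assuming the Confluent HDFS sink connector; the topic name, HDFS URL, and counts below are placeholders, not recommendations:

```
# hdfs-sink.properties
name=hdfs-sink
connector.class=io.confluent.connect.hdfs.HdfsSinkConnector
tasks.max=3
topics=my_topic
hdfs.url=hdfs://namenode:8020
flush.size=1000
```

flush.size controls how many records are accumulated before a file is committed to HDFS.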
http://www.confluent.io/blog/how-to-build-a-scalable-etl-pipeline-with-kafka-connect
On Wed, Apr 27, 2016 at 1:59 PM, Mudit Kumar wrote:
I have not done it myself, but did you look at
http://henning.kropponline.de/2015/11/15/kafka-security-with-kerberos/
Also, what did you try, and what problems/errors did you face? If you include
those details, you will get a better response.
Thanks
Ashutosh
On Tue, Jan 5, 2016 at 6:57 PM,
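Roughly, the setup the linked post describes comes down to a JAAS file per broker plus SASL settings in server.properties. A sketch; the keytab path, principal, and hostname are placeholders:

```
// kafka_server_jaas.conf (point the broker JVM at it with
// -Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf)
KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/kafka.service.keytab"
  principal="kafka/broker1.example.com@EXAMPLE.COM";
};

# server.properties additions
listeners=SASL_PLAINTEXT://broker1.example.com:9093
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
```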
I have installed it on a local instance of Ubuntu and it works fine.
On Mon, Apr 20, 2015 at 11:17 AM, sunil kalva kalva.ka...@gmail.com wrote:
Hi
Has anyone tried running ZooKeeper and Kafka locally? This can be useful for
automating test cases for an API built on Kafka.
SunilKalva
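For a local single-node setup, the scripts bundled with the Kafka distribution are enough; a sketch, assuming you have unpacked a Kafka release and are inside its directory:

```
# start ZooKeeper and a single broker in the background
bin/zookeeper-server-start.sh config/zookeeper.properties &
bin/kafka-server-start.sh config/server.properties &

# create a topic for the tests
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
  --replication-factor 1 --partitions 1 --topic test
```

For automated test suites, an alternative is to start an embedded broker from the test code itself and tear it down after each suite, so tests do not depend on externally started processes.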
I think you need to rebalance the cluster, with something like:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181
--topics-to-move-json-file topics-to-move.json --broker-list 5,6
--generate
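To spell out the full flow: the JSON file lists the topics to move, --generate only prints a proposed assignment (it moves nothing by itself), and the proposal is then applied with --execute and checked with --verify. A sketch; the topic name and broker ids are placeholders:

```
cat > topics-to-move.json <<'EOF'
{"version": 1, "topics": [{"topic": "my-topic"}]}
EOF

bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list 5,6 --generate

# save the proposed assignment it prints to reassignment.json, then:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --execute

# check progress
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 \
  --reassignment-json-file reassignment.json --verify
```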
On Mon, Apr 13, 2015 at 11:22 AM, shadyxu shad...@gmail.com wrote:
I added several new brokers to
that no partitions needed to be removed. Was my JSON file not properly
configured?
2015-04-13 14:00 GMT+08:00 Ashutosh Kumar kmr.ashutos...@gmail.com: