We are also seeing this problem with version 0.7.1 and logs on an XFS
partition. At our largest scale we can frequently free over 600GB of disk
usage by simply restarting Kafka. We've examined the `lsof` output from the
Kafka process and while it does appear to have FDs open for all log files on
Btw, I've been running this patch in our cloud env and it's been working
fine so far.
I actually filed another bug as I saw another problem on windows locally (
https://issues.apache.org/jira/browse/KAFKA-1036).
Tim
On Wed, Aug 21, 2013 at 4:29 PM, Jay Kreps jay.kr...@gmail.com wrote:
This could certainly be done. It would be slightly involved since you would
need to implement some kind of file-handle cache for both indexes and log
files and re-open them on demand when a read occurs. If someone wants to
take a shot at this, the first step would be to get a design wiki in place
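The file-handle cache Jay describes could be sketched, hypothetically, with an access-ordered LinkedHashMap that closes handles on eviction and re-opens files on demand. The class and method names below are invented for illustration; this is not Kafka's actual implementation.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: an LRU cache of file handles for log and index
// files. Evicted handles are closed to release the FD, and a file is
// transparently re-opened on the next read that needs it.
public class FileHandleCache {
    private final Map<String, RandomAccessFile> open;

    public FileHandleCache(final int maxOpen) {
        // accessOrder=true makes iteration order least-recently-used first
        this.open = new LinkedHashMap<String, RandomAccessFile>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, RandomAccessFile> eldest) {
                if (size() > maxOpen) {
                    try {
                        eldest.getValue().close(); // release the FD on eviction
                    } catch (IOException ignored) {
                    }
                    return true;
                }
                return false;
            }
        };
    }

    // Return an open handle for the path, re-opening the file if its
    // handle was previously evicted.
    public synchronized RandomAccessFile handleFor(String path) throws IOException {
        RandomAccessFile raf = open.get(path);
        if (raf == null) {
            raf = new RandomAccessFile(path, "r");
            open.put(path, raf); // may evict (and close) the LRU entry
        }
        return raf;
    }

    public synchronized int openCount() {
        return open.size();
    }
}
```

A real version would also need to invalidate entries when segments are deleted (the original Windows problem), since a cached handle keeps the underlying file's disk space pinned even after deletion.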
So guys, do we want to do these in 0.8? The first patch was a little
involved but I think it would be good to have windows support in 0.8 and it
sounds like Tim is able to get things working after these changes.
-Jay
On Mon, Sep 9, 2013 at 10:19 AM, Timothy Chen tnac...@gmail.com wrote:
Thanks, Neha. That number of connections formula is very helpful.
Regards,
Libo
-Original Message-
From: Neha Narkhede [mailto:neha.narkh...@gmail.com]
Sent: Monday, September 09, 2013 12:17 PM
To: users@kafka.apache.org
Subject: Re: is it possible to commit offsets on a per stream
Hi Raja,
So just to summarize the scenario:
1) The consumer of mirror maker is successfully consuming all partitions of
the newly created topic.
2) The producer of mirror maker is not producing the new messages
immediately when the topic is created (observed from ProducerSendThread's
log).
3)
+1 for windows support on 0.8
Thanks,
Neha
On Mon, Sep 9, 2013 at 10:48 AM, Jay Kreps jay.kr...@gmail.com wrote:
That's awesome! Thanks for taking the time to let us know...the nature of
infrastructure is that we usually only hear about things when they don't
work. :-)
It would be great for the project if you guys could do a blog post on your
setup. Also, any objections if I add you to our powered by page?
No problem at all -- please feel free to add our name to that page. The
marketing blurb is: Loggly is the world's most popular cloud-based log
management service. It helps DevOps and technical teams make sense of the
massive quantity of logs that are being produced
Yes, the data exists in the source cluster, but not in the target cluster. I
can't replicate this problem in the dev environment; it happens only in the
prod environment. I turned on debug logging but was not able to identify the
problem. Basically, whenever I send data to a new topic, I don't see any log
messages
Cool can we get a reviewer for KAFKA-1008 then? I can take on the other
issue for the checkpoint files.
-Jay
On Mon, Sep 9, 2013 at 3:16 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
I did take a look at KAFKA-1008 a while back and added some comments.
On 9/9/13 3:52 PM, Jay Kreps jay.kr...@gmail.com wrote:
Hi everyone,
I am trying to setup a Kafka cluster and have a couple of questions about
failover.
Has anyone deployed more than one ZooKeeper node for a single Kafka cluster to get
high availability, so that if one ZooKeeper node goes down, the cluster
automatically fails over to a backup ZooKeeper?
You want to set up a ZooKeeper ensemble (always an odd number of servers;
three is often acceptable):
http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_zkMulitServerSetup
And use Kafka 0.8 replication
http://kafka.apache.org/documentation.html#replication in addition if you want
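A minimal three-node ensemble config might look like the sketch below; the hostnames and paths are placeholders, not values from this thread.

```
# zoo.cfg -- hypothetical three-node ZooKeeper ensemble
tickTime=2000
initLimit=5
syncLimit=2
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each server also needs a `myid` file in its dataDir containing its server number (1, 2, or 3), and Kafka's `zookeeper.connect` would then list all three hosts so clients survive a single-node failure.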
I think Sriram's complaint is that I haven't yet addressed his concerns :-)
Sent from my iPhone
On Sep 9, 2013, at 3:56 PM, Sriram Subramanian srsubraman...@linkedin.com
wrote:
Philip,
Thanks for posting this. Are you guys using 0.7 or 0.8?
Jun
On Mon, Sep 9, 2013 at 12:49 PM, Philip O'Toole phi...@loggly.com wrote:
Hello Kafka users and developers,
We at Loggly launched our new system last week, and Kafka is a critical
part. I just wanted to say a sincere
We are currently using 0.7.2.
I will provide more details in a future post, including partition
configuration, what kind of producers and consumers we use, how we use it,
etc.
Philip
On Mon, Sep 9, 2013 at 8:48 PM, Jun Rao jun...@gmail.com wrote:
Gotcha :)
Seems like this will be taken care of then.
Tim
On Mon, Sep 9, 2013 at 6:22 PM, Jay Kreps jay.kr...@gmail.com wrote: