Actually, that's a good point.
I don't think we are publishing our Scala / Java docs anywhere (well,
Jun Rao has the RC artifacts in his personal apache account:
https://people.apache.org/~junrao/)
Any reason we are not posting those to our docs SVN and linking them from our
website? Many Apache projects
Grr.. delete.topic.enable=true was wiped out from server.properties at Kafka
restart. I have it working now.. thanks.
-Original Message-
From: Harsha [mailto:ka...@harsha.io]
Sent: Sunday, March 01, 2015 12:53 PM
To: users@kafka.apache.org
Subject: Re: Delete Topic in 0.8.2
Hi Hema,
On Fri, Feb 27, 2015 at 8:09 PM, Jeff Schroeder jeffschroe...@computer.org
wrote:
Kafka on dedicated hosts running in Docker under Marathon under Mesos. It
was a real bear to get working, but it is really beautiful now that I have
it running. I simply run with a unique hostname
Side question: why run Kafka in Docker on AWS? Is the Docker config being used
for configuration management? Are there more systems running on the instance
other than Kafka?
Sent by Outlook (http://taps.io/outlookmobile) for Android
On Sun, Mar 1, 2015 at 1:10 PM -0800, Ewen Cheslack-Postava
Hi Theo,
It seems like you found the answer yourself: the server may return partial
messages.
While parsing a MessageSet you simply ignore any message whose length exceeds
the remaining buffer length.
Regards,
Magnus
2015-03-01 8:55 GMT+01:00 Theo Hultberg t...@iconara.net:
That the message set size was
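Magnus's skip-the-partial-message rule can be sketched as follows. This uses a deliberately simplified `[length][payload]` layout rather than Kafka's real wire format (which also carries an offset, CRC, attributes, etc.), so it illustrates only the parsing rule, not the protocol:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class MessageSetParser {
    // Parses a simplified message set laid out as repeated
    // [4-byte length][payload]. A trailing partial message (one whose
    // declared length exceeds the remaining buffer) is ignored; the
    // consumer would re-fetch it, typically with a larger fetch size.
    public static List<byte[]> parse(ByteBuffer buf) {
        List<byte[]> messages = new ArrayList<>();
        while (buf.remaining() >= 4) {
            buf.mark();
            int length = buf.getInt();
            if (length > buf.remaining()) {
                buf.reset();   // rewind to the start of the partial message
                break;         // and stop: it is incomplete
            }
            byte[] payload = new byte[length];
            buf.get(payload);
            messages.add(payload);
        }
        return messages;
    }
}
```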
On Sun, Mar 1, 2015 at 1:46 AM, Guozhang Wang wangg...@gmail.com wrote:
Hi Honghai,
1. If a partition has no leader (i.e. all of its replicas are down) it will
become offline, and hence the metadata response will not have this
partition's info.
If I am understanding this correctly, then
That is what I am using. The problem is when I run it the CPU spikes on the
broker I am running it from. I just wanted to know if there was a different way.
Gene
Sent from my iPhone
On Feb 28, 2015, at 10:46 PM, Guozhang Wang wangg...@gmail.com wrote:
If it is ZK based offset commit, you
Slightly different from what I observed.
The broker box has 800GB of disk space. With appropriate log retention it's
supposed to hold the log size, but disk usage hit 90%, and doing nothing other
than restarting the broker server freed 40% of the disk space.
It's for sure the speed of the
Did you check if log.delete.delay.ms is set to 6?
Guozhang
On Sun, Mar 1, 2015 at 9:00 PM, Guangle Fan fanguan...@gmail.com wrote:
Slightly different from what I observed.
Broker box has 800GB disk space. By setting the appropriate log retention,
it's supposed to hold the log size. But
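For reference, the settings under discussion look like this in server.properties; the property names are real Kafka broker configs, but the values below are illustrative only, not recommendations:

```properties
# Retention bounds; segments past either limit become eligible for deletion.
log.retention.hours=168
# Note: log.retention.bytes applies per partition, not per broker.
log.retention.bytes=107374182400
# Deleted segments are first renamed with a .deleted suffix and only
# removed from disk after this delay elapses.
log.delete.delay.ms=60000
```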
If you are using ZK based offset commit, you have to read the offset from
ZK. If you can make code change, one potential improvement is to reuse
ZKClient as explained below.
Currently, ConsumerOffsetChecker only takes one consumer group for each
run. If you have a lot of consumer groups to check,
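The reuse idea can be sketched as follows, assuming the old ZK-based offset layout (`/consumers/<group>/offsets/<topic>/<partition>`). The `zkRead` function here is a hypothetical stand-in for a real ZkClient read; the point is that one client/session serves every group instead of spawning a fresh ConsumerOffsetChecker (and ZK session) per group:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class OffsetCheck {
    // ZooKeeper path where the old ZK-based consumer stores offsets.
    static String offsetPath(String group, String topic, int partition) {
        return "/consumers/" + group + "/offsets/" + topic + "/" + partition;
    }

    // Check many groups through a single reader (one shared ZK session),
    // rather than one ConsumerOffsetChecker run per group.
    static Map<String, Long> checkAll(List<String> groups, String topic,
                                      int partition,
                                      Function<String, Long> zkRead) {
        Map<String, Long> offsets = new LinkedHashMap<>();
        for (String group : groups) {
            offsets.put(group, zkRead.apply(offsetPath(group, topic, partition)));
        }
        return offsets;
    }
}
```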
One thing to remember is that the .index files are memory-mapped [1]
which in Java means that the file descriptors may not be released even
when the program is done using them. A garbage collection is expected to
close such resources, but forcing a System.gc() is only a hint and thus
doesn't
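A minimal illustration of the mapping behavior described above, in the style Kafka uses for its .index files. Note that closing the channel does not release the mapping; Java exposes no public explicit unmap, so the mapping (and, on some platforms, the descriptor) lives until the buffer is garbage collected:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Memory-maps a file and copies its contents out. The MappedByteBuffer
    // remains valid after the try-with-resources closes the channel: the
    // underlying mapping is only released when the buffer is collected,
    // which is exactly why a System.gc() "hint" is not a reliable cleanup.
    public static byte[] readMapped(Path file) throws IOException {
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] out = new byte[buf.remaining()];
            buf.get(out);
            return out;
        }
    }
}
```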
Evan,
In the java producer, partition id of the message is determined in the
send() call and then the data is appended to the corresponding batch buffer
(one buffer for each partition), i.e. the partition id will never change
once it is decided. If the partition becomes offline after this, the
My concern is more with the partitioner that determines the partition of
the message. IIRC, it does something like hash(key) mod #partitions in
the normal case, which means if the # of partitions changes because some of
them are offline, then certain messages will be sent to the wrong (online)
partitions.
I see.
If you need to make sure messages are going to the same partition during
broker bouncing / failures, then you should not depend on the partitioner
to decide the partition id but explicitly set it before calling send().
For example, you can use the total number of partitions for the topic,
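That suggestion can be sketched as below. `partitionFor` is a hypothetical helper implementing the hash-mod scheme discussed above against the topic's total partition count (fixed at topic creation), rather than the count of currently online partitions; IIRC the 0.8.2 java client lets you pass the resulting id explicitly via `new ProducerRecord<>(topic, partition, key, value)`:

```java
import java.util.Arrays;

public class StablePartitioner {
    // Map a key to a partition using the topic's TOTAL partition count,
    // so the mapping stays stable while brokers bounce and partitions
    // temporarily go offline. The caller passes the result to send()
    // explicitly instead of relying on the default partitioner.
    public static int partitionFor(byte[] key, int totalPartitions) {
        int h = Arrays.hashCode(key);
        return (h & 0x7fffffff) % totalPartitions; // mask sign bit, then mod
    }
}
```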
The Kafka 0.8.2 server stopped after getting the I/O exception below.
Any thoughts on this exception? Can it be file system related?
[2015-03-01 14:36:27,627] FATAL [KafkaApi-0] Halting due to unrecoverable
I/O error while handling produce request: (kafka.server.KafkaApis)
Ivan,
From your description it seems Kafka stores the source of truth for the data,
and the k-v store is constructed by consuming from Kafka, right? In that
case a time/size-based data retention policy is usually not preferred, as it
may unexpectedly delete data while people are querying the k-v
Which I think is my point - based on my current understanding, there is
*no* way to find out the total number of partitions for a topic besides
hard-coding it or manually reading it from zookeeper. The kafka metadata
API does not reliably expose that information.
Evan
On Sun, Mar 1, 2015 at
2015-03-01 18:41 GMT+03:00 Jay Kreps jay.kr...@gmail.com:
They are mutually exclusive. Can you expand on the motivation/use for
combining them?
Thanks, Jay
Let's say we need to build key-value storage semantically connected to
data that is also stored in Kafka.
Once the particular pieces of
Hi,
Do I understand correctly that compaction and deletion are currently
mutually exclusive?
Is it possible to compact recent segments and delete older ones,
according to general deletion policies?
Thanks,
2014-11-30 15:10 GMT+03:00 Manikumar Reddy ku...@nmsworks.co.in:
Log cleaner does not
Thank you for confirming my suspicion.
T#
On Sun, Mar 1, 2015 at 9:48 AM, Magnus Edenhill mag...@edenhill.se wrote:
Hi Theo,
seems like you found the answer yourself, the server may return partial
messages.
While parsing a MessageSet you simply ignore any message whose length
They are mutually exclusive. Can you expand on the motivation/use for
combining them?
-Jay
On Sunday, March 1, 2015, Ivan Balashov ibalas...@gmail.com wrote:
Hi,
Do I understand correctly that compaction and deletion are currently
mutually exclusive?
Is it possible to compact recent
Pretty please, can someone add a link to the scaladoc API reference
for the current release?
http://kafka.apache.org/documentation.html
On Sat, Feb 28, 2015 at 9:31 PM, Guozhang Wang wangg...@gmail.com wrote:
Is this what you are looking for?
http://kafka.apache.org/07/documentation.html
On
I upgraded my Kafka server to 0.8.2 and the client to use 0.8.2 as well.
I am trying to test the delete topic feature and I see that delete topic does
not work consistently.
I saw it working fine the first few times, but after a while I saw that deleting
topics adds them to the delete_topic node in the admin folder
I think currently you can issue delete markers (tombstones) for the keys.
That will delete the data associated with the respective keys during
compaction. But the keys will still exist in the log.
Thanks,
Mayuresh
On Sun, Mar 1, 2015 at 8:07 AM, Ivan Balashov ibalas...@gmail.com wrote:
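Mayuresh's description of compaction-with-tombstones can be modeled with a toy in-memory pass; real compaction works segment-by-segment on the on-disk log, so this map-based version is an illustration of the semantics only. A producer would emit the tombstone itself as a record with a null value (e.g. `new ProducerRecord<>(topic, key, null)`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompactionSketch {
    // Toy model of log compaction: keep only the latest value per key;
    // a null value (a tombstone) removes the key during the compaction
    // pass. Each log entry is a {key, value} pair in append order.
    public static Map<String, String> compact(String[][] log) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] entry : log) {
            String key = entry[0], value = entry[1];
            if (value == null) {
                latest.remove(key);   // tombstone: drop the key
            } else {
                latest.put(key, value);
            }
        }
        return latest;
    }
}
```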
What do you mean by "We have a dedicated broker that has no leader
partitions"? Are you running anything else on that machine? I think you can
run the tool Guozhang mentioned from any machine; it doesn't require a Kafka
broker.
Thanks,
Mayuresh
On Sun, Mar 1, 2015 at 5:44 AM, Gene Robichaux
Also I suppose when the broker starts up it will remove the files that are
marked with the .deleted suffix, and that's why you see the freed disk space
after restarting. Guozhang can correct me if I am wrong.
Thanks,
Mayuresh
On Sat, Feb 28, 2015 at 9:27 PM, Guozhang Wang wangg...@gmail.com wrote:
Hi Hema,
Can you attach controller.log and state-change.log? The image is not showing
up, at least for me. Can you also give us details on how big the cluster is,
the topic's partitions and replication factor, and any steps to reproduce
this.
Thanks, Harsha
On Sun, Mar 1, 2015, at 12:40 PM, Hema