It seems I often call api code not under the javaapi package as well
(primarily in our container code which embeds a KafkaServer instance), but
also in some client code. Thus, it seems the javaapi is not meant to be a
single point of entry, no?
Jason
On Thu, Nov 29, 2012 at 4:31 PM, Jay Kreps
Hi Neha,
Can you describe the migration tool you mention below, for copying data
from 0.7 to 0.8? Is this something provided with 0.8? Or do apps need to
write custom migration tools?
Thanks,
Jason
On Tue, Jan 15, 2013 at 11:06 AM, Neha Narkhede neha.narkh...@gmail.com wrote:
Broadly, the
I suspect this is not currently supported, but it seems to be for us a real
use case.
If we have a topic that is no longer receiving messages, and all messages
have been removed from the brokers, after the log_retention_hours has
expired, I'd love to have it then automatically remove the topic
This requires some API that
will expose the data size of a topic on a broker, which we don't have right
now. It might be worth thinking about it. Do you mind posting your
suggestions for this tool on that JIRA ?
Thanks,
Neha
On Wed, Mar 6, 2013 at 3:15 PM, Jason Rosenberg j...@squareup.com
to migrate data between clusters.
What's the procedure for deleting topics in kafka 0.7.2?
There is no official way to delete a topic. You have to delete the data
directory and bounce the brokers to take note of that.
Thanks,
Neha
On Wed, Mar 6, 2013 at 4:20 PM, Jason Rosenberg j
On Wed, Mar 6, 2013 at 6:07 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
In 0.8, this is controlled by the auto.create.topics.enable config on the
brokers. If this is set to true, topics will be created when a topic
metadata request is sent for a new topic. This feature is provided to aid
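For reference, the broker setting being described might be sketched in server.properties like this (the value shown is the 0.8 default, as I understand it):

```properties
# 0.8 broker setting: when true, a topic is created automatically the
# first time a topic metadata request names it.
auto.create.topics.enable=true
```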
:13 PM, Jason Rosenberg j...@squareup.com wrote:
Thanks Neha,
So are you saying that on 0.7.2, to delete a topic I need only remove
its data log directory from each broker, and then restart the brokers? Is it
ok if it's a rolling restart?
For some reason I thought I also had to do
that log line is redundant, I think it is removed in 0.8
Thanks,
Neha
On Thu, Mar 14, 2013 at 2:17 PM, Jason Rosenberg j...@squareup.com wrote:
Also,
I see a bazillion consecutive log lines like this:
2013-03-14 19:54:13,306 INFO [Thread-4] consumer.ConsumerIterator -
Clearing
topic exists in ZK? It
should have no children.
/brokers/topics/[topic]
If this is the case, try manually removing those paths from ZK (when the
brokers and the consumers are down).
Thanks,
Jun
On Thu, Mar 14, 2013 at 2:03 PM, Jason Rosenberg j...@squareup.com wrote:
Hi Neha,
So I
could be left by the
producers if they haven't been restarted. Could you also use zkCli.sh to
see if deleted topics are there in ZK?
Thanks,
Jun
On Fri, Mar 15, 2013 at 2:19 PM, Jason Rosenberg j...@squareup.com wrote:
Jun,
So, I connected to zookeeper just using telnet, and using the 4
:
Are you using ZK-based producer? If so, those watches could be left by
the
producers if they haven't been restarted. Could you also use zkCli.sh
to
see if deleted topics are there in ZK?
Thanks,
Jun
On Fri, Mar 15, 2013 at 2:19 PM, Jason Rosenberg j...@squareup.com
wrote
/consumers/appname/owners.
Thanks,
Jun
On Sun, Mar 17, 2013 at 11:11 PM, Jason Rosenberg j...@squareup.com
wrote:
Jun,
There are indeed no nodes under /brokers/topics/deletedtopic
Also, do I need to remove the deleted apps from the
/consumers/appname/owners path?
So, should
I need to upgrade some kafka broker servers. So I need to seamlessly
migrate traffic from the old brokers to the new ones, without losing data,
and without stopping producers. I can temporarily stop consumers, etc.
Is there a strategy for this?
Also, because of the way we are embedding kafka
wrote:
On Wed, Mar 20, 2013 at 12:06 PM, Jason Rosenberg j...@squareup.com
wrote:
On Wed, Mar 20, 2013 at 12:00 PM, Philip O'Toole phi...@loggly.com
wrote:
For
producers, also, you can't really use a load-balancer to connect to
brokers
(you can use zk, or you can use a broker list
It looks like there is a race condition between the settings for the 2
properties: log.default.flush.scheduler.interval.ms and
log.default.flush.interval.ms. I'm using 0.7.2.
By default, both of these get set to 3000ms (and in the docs, it
recommends setting flushInterval to be a multiple of the
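The two properties in question might look like this in a 0.7.2 server.properties (3000 ms is the default for both, per the above):

```properties
# Both default to 3000 ms; the docs recommend making the flush interval
# a multiple of the scheduler interval so the checks line up.
log.default.flush.scheduler.interval.ms=3000
log.default.flush.interval.ms=3000
```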
wrote:
Hi, the mailing list was moved to users@kafka.apache.org a long time ago.
You may resubscribe to the list, or the list may be removed in the future.
At 2013-03-28 15:39:07, Jason Rosenberg j...@squareup.com
wrote:
Hi,
We are managing our kafka clusters by doing
flush last segment and close file channel
So, it does what you are suggesting. All unflushed data are supposed to be
written to disk on a clean shutdown.
Thanks,
Jun
On Thu, Mar 28, 2013 at 8:10 AM, Jason Rosenberg j...@squareup.com wrote:
Hi Neha,
I enabled TRACE logging
,
Jun
On Thu, Mar 28, 2013 at 12:43 AM, Jason Rosenberg j...@squareup.com
wrote:
It looks like there is a race condition between the settings for the 2
properties: log.default.flush.scheduler.interval.ms and
log.default.flush.interval.ms. I'm using 0.7.2.
By default, both of these get
are fixed in 0.8.
Thanks,
Neha
On Thu, Mar 28, 2013 at 11:44 AM, Jason Rosenberg j...@squareup.com
wrote:
Ok, sorry,
I see now that in fact, it does close all logs during LogManager.close(),
which deeper in the code flushes logSegments. So it doesn't do so as
explicitly
Essentially,
There's a configuration property: log.retention.hours
This determines the minimum time a message will remain available on the
broker. The default is 7 days.
The kafka broker doesn't keep track of whether the message has been
consumed or not (or how many times it has been
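As a sketch, the retention setting described above looks like this in server.properties (168 hours is the 7-day default):

```properties
# Minimum time a message stays on the broker, regardless of whether
# any consumer has read it. Default is 168 hours (7 days).
log.retention.hours=168
```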
Jun,
Can you clarify, will the maven integration be available for the 0.8 beta
release, or only the final release?
Jason
On Thu, Apr 4, 2013 at 7:26 AM, Jun Rao jun...@gmail.com wrote:
Not yet, but will be for the 0.8 release.
Thanks,
Jun
On Thu, Apr 4, 2013 at 5:29 AM, Oleg Ruchovets
Ok,
This is a feature I've been hoping for, so I added an upvote to Kafka-330.
But I will defer to you in terms of not wanting to delay 0.8 unnecessarily.
Will we still have a backhanded way to remove a topic if need be?
Ultimately, I'd like to see the feature where a topic automatically is
Will this issue be fixed in 0.8?
On Wed, Apr 17, 2013 at 7:40 AM, 王国栋 wangg...@gmail.com wrote:
Thanks Neha.
Yes, I believe we run into this issue.
We will try this patch. Currently, making the topic-partition directory
manually is OK for us.
Guodong
On Wed, Apr 17, 2013 at 9:24 PM, Neha
I'm interested in the same topic (similar use case).
What I think would be nice too (and this has been discussed a bit in the
past on this list), would be to have ssl support within the kafka protocol.
Zookeeper also doesn't support ssl, but at least now, in 0.8, producing
clients no longer
By the way, is there a reason why 'log.roll.hours' is not documented on the
apache configuration page: http://kafka.apache.org/configuration.html ?
It's possible to find this setting (and several other undocumented
settings) by looking at the source code. I'm just not sure why the
complete set
So, we have lots of apps producing messages to our kafka 0.7.2 instances
(and multiple consumers of the data).
We are not going to be able to follow the suggested migration path, where
we first migrate all data, then move all producers to use 0.8, etc.
Instead, many apps are on their own release
requirement. If there are fewer applications producing to and consuming
from any particular topic, you can group together those and push them at
roughly the same time.
Thanks,
Neha
On May 1, 2013 8:44 PM, Jason Rosenberg j...@squareup.com wrote:
So, we have lots of apps producing messages
Recently, we had an issue where our kafka brokers were shut down hard (and
so did not write out the clean shutdown file). Thus on restart, it went
through all logs and ran a recovery on them.
Unfortunately, this took a long time (on the order of 30 minutes). We have
a lot of topics (e.g. ~1000
I just tried to go through the quickstart, step by step on a Mac. I got
the same thing (LeaderNotAvailableException).
On Thu, Apr 25, 2013 at 9:30 PM, Jun Rao jun...@gmail.com wrote:
Thanks. Is anyone able to run the 0.8 quickstart without this issue on
Windows?
Jun
On Thu, Apr 25, 2013
Is there a maven repo we can point to, to just depend on the kafka 0.8
core, and have all the dependencies get pulled in as needed?
For some reason, I had thought this would be available as part of the 0.8
release
Or do I need to manually create a pom.xml for the core, and host it on my
On Mon, May 06, 2013 at 04:31:50PM -0700, Jason Rosenberg wrote:
Is there a maven repo we can point to, to just depend on the kafka 0.8
core, and have all the dependencies get pulled in as needed?
For some reason, I had thought this would be available as part
'kafka_0.8.0.pom' or
'kafka_0.8.0-SNAPSHOT.pom', etc...
Jason
On Tue, May 7, 2013 at 1:48 PM, Jason Rosenberg j...@squareup.com wrote:
Alex,
Thanks for the tip, this is exactly what I need.
Jason
On Tue, May 7, 2013 at 11:58 AM, Gray, Alex alex.g...@inin.com wrote:
Hi Jason,
I don't know
:26, Jason Rosenberg wrote:
Except that it still has the annoying feature of naming the kafka version
after the scala version, which doesn't make sense, e.g.:
~/.ivy2/local/org.apache/kafka_2.8.0/0.8.0-SNAPSHOT/poms ls -l
total 24
-rw-r--r-- 1 jbr jbr 3786 May 7 13:43 kafka_2.8.0.pom
I'm porting some unit tests from 0.7.2 to 0.8.0. The test does the
following, all embedded in the same java process:
-- spins up a zk instance
-- spins up a kafka server using a fresh log directory
-- creates a producer and sends a message
-- creates a high-level consumer and verifies that it
, Jun Rao jun...@gmail.com wrote:
Yes, both are expected.
Thanks,
Jun
On Wed, May 8, 2013 at 12:16 AM, Jason Rosenberg j...@squareup.com wrote:
I'm porting some unit tests from 0.7.2 to 0.8.0. The test does the
following, all embedded in the same java process:
-- spins up a zk
I'm seeing this issue with a single node zk instance, on my localhost. If
my zkconnect is localhost:12345, it works...
but if I add a chroot, e.g.: localhost:12345/xyz, I get the same error:
java.lang.IllegalArgumentException: Path length must be 0
I also get the error if I do:
It works if I manually create the chroot first. But this is a bit
cumbersome if I want to do an automated roll out to multiple deployments,
etc... but workable
Should I file a jira?
On Wed, May 8, 2013 at 4:31 PM, Jason Rosenberg j...@squareup.com wrote:
I'm seeing this issue
With 0.8.0, I'm seeing that an initial metadata request fails, if the
number of running brokers is fewer than the configured replication factor:
877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis -
[KafkaApi-1946108683] Error while retrieving topic metadata
, 2013 at 9:55 AM, Jason Rosenberg j...@squareup.com wrote:
If expected, does it make sense to log them as exceptions as such? Can
we
instead log something meaningful to the console, like:
No leader was available, one will now be created
or
ConsumerConnector has shutdown
etc
It looks like by default, the first time a new message arrives for a given
topic, it will receive the default replication factor in place on the
broker at the time it is first received.
Is it possible to change this later (e.g. say if we add more hardware to
the cluster at a later date, etc.)?
at 10:15 PM, Jason Rosenberg j...@squareup.com wrote:
With 0.8.0, I'm seeing that an initial metadata request fails, if the
number of running brokers is fewer than the configured replication
factor:
877 [kafka-request-handler-0] ERROR kafka.server.KafkaApis -
[KafkaApi-1946108683] Error
Hi,
I'm wondering if there's a good way to have a heterogenous kafka cluster
(specifically, if we have nodes with different sized disks). So, we might
want a larger node to receive more messages than a smaller node, etc.
I expect there's something we can do with using a partitioner that has
and explicitly specify the replica
to broker mapping. Post 0.8, we can think of some more automated ways to
deal with this (e.g., let each broker carry some kind of weight).
Thanks,
Jun
On Fri, May 17, 2013 at 2:29 PM, Jason Rosenberg j...@squareup.com wrote:
Hi,
I'm wondering
Hi,
I am seeing an unexpected situation. My producers use a zkconnection
string to connect to kafka (this is still 0.7.2). If one of the zk hosts
is taken down and removed from dns, it causes an UnknownHostException, and
the producer can't initialize. I expect this is different than the less
to
resolve every host on startup, you have to make sure that is possible.
Thanks,
Neha
On Tue, May 21, 2013 at 11:09 AM, Jason Rosenberg j...@squareup.com
wrote:
Hi,
I am seeing an unexpected situation. My producers use a zkconnection
string to connect to kafka (this is still 0.7.2
Normally, I see 2-4 log segments deleted every hour in my brokers. I see
log lines like this:
2013-05-23 04:40:06,857 INFO [kafka-logcleaner-0] log.LogManager -
Deleting log segment 035434043157.kafka from redacted topic
However, it seems like if I restart the broker, a massive amount
So, does this indicate kafka (or the jvm itself) is not aggressively
closing file handles of deleted files? Is there a fix for this? Or is
there not likely anything to be done? What happens if the disk fills up
with file handles for phantom deleted files?
Jason
On Wed, May 22, 2013 at 9:50
Kafka deleted them. I haven't noticed
this on our systems but we haven't looked for it either.
Is anything outside of Kafka deleting or reading those files?
On May 23, 2013 1:17 AM, Jason Rosenberg j...@squareup.com wrote:
So, does this indicate kafka (or the jvm itself) is not aggressively
With 0.8, we now have ack levels when sending messages. I'm wondering how
this applies when sending messages in async mode. Are there any guarantees
at least that each async batch will wait for the requested ack level before
sending the next batch?
I assume there is still a disconnect between
In this case, does the consumer code need to change, account for the
compression (or is this handled automatically in the consumer apis?).
If we decide to start compressing messages to a topic, can a consumer
seamlessly move between the transition from uncompressed to compressed?
Jason
On Tue,
Try the one under core/targets?
On Tue, Jun 11, 2013 at 3:34 PM, Florin Trofin ftro...@adobe.com wrote:
I downloaded the latest 0.8 snapshot and I want to build using Maven:
./sbt make-pom
Generates a bunch of pom.xml files but when I try to open one of them in
IntelliJ they are not
of the zookeeper
settings:
It was renamed from zk.connect to zookeeper.connect.
You should check all of the settings because other setting names have
changed as well.
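As a concrete before/after of the rename, for anyone updating configs (localhost:2181 is just a placeholder):

```properties
# 0.7.x name (no longer recognized in 0.8):
#   zk.connect=localhost:2181
# 0.8 name:
zookeeper.connect=localhost:2181
```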
Cheers,
Eric Sites
On 6/16/13 5:14 PM, Jason Rosenberg j...@squareup.com wrote:
I've started having problems with the latest version
it together or
whatever amount of detailed information you can provide and then please
open up a JIRA ticket https://issues.apache.org/jira/browse/KAFKA
Thanks!
On Sun, Jun 16, 2013 at 11:14 PM, Jason Rosenberg j...@squareup.com
wrote:
Yep,
The configs are good. And my apps are working
On Sun, Jun 16, 2013 at 11:36 PM, Jason Rosenberg j...@squareup.com
wrote:
Joe,
So I am using the 2.8.2 build of the kafka jar, using that latest
beta1-candidate1 tag.
The code above should be all you need to reproduce the issue. I'll
create
a JIRA ticket.
Thanks,
Jason
at 4:48 PM, Jason Rosenberg j...@squareup.com wrote:
Ok,
So it seems the issue is related somehow to how I've wrapped the server
(using a container app, using the maven pom from ./sbt make-pom). If I
start the server using the script kafka-server-start.sh, it works fine.
Still looking
/server.properties been vetted for
the new config values?
Jason
On Mon, Jun 17, 2013 at 5:23 PM, Joe Stein crypt...@gmail.com wrote:
you can use this to build the binary distributable
./sbt release-tar
On Mon, Jun 17, 2013 at 8:17 PM, Jason Rosenberg j...@squareup.com wrote:
Looking
, Chief Architect
http://www.medialets.com
Twitter: @allthingshadoop
Mobile: 917-597-9771
*/
On Jun 17, 2013, at 8:41 PM, Jason Rosenberg j...@squareup.com wrote:
Joe,
Is there also a way to generate a sources jar via sbt?
Thanks,
Jason
On Mon, Jun 17, 2013 at 5:28 PM, Jason
suspect there's an underlying problem).
Jason
On Mon, Jun 17, 2013 at 6:05 PM, Jason Rosenberg j...@squareup.com wrote:
Hmmm... that's not working for me (no *.sources.jar files are generated).
do I need to add a flag?
On Mon, Jun 17, 2013 at 5:47 PM, Joe Stein crypt...@gmail.com wrote:
Yup
I'm wondering why the default setting for auto.offset.reset in the
ConsumerConfig class was changed from 'smallest' to 'largest', so late in
the game (looks like a commit on June 3 changed the default). This is an
extremely major change, I should think. Consumers now by default only get
messages
that they don't get too many duplicates (there
could be a small number of message loss for those consumers). (2) This
matches the default behavior of console consumer which is the first thing
that most new users experience. Does that make sense?
Thanks,
Jun
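For anyone following along, this is the consumer property under discussion (the value shown is the new default described above):

```properties
# Where a consumer starts when it has no committed offset:
# 'largest' (now the default) starts at the newest message;
# 'smallest' replays from the beginning of the log.
auto.offset.reset=largest
```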
On Tue, Jun 18, 2013 at 9:02 AM, Jason
Was just reading about Controlled Shutdown here:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
Is this something that can be invoked from code, from within a container
running the KafkaServer?
Currently I launch kafka.server.KafkaServer directly from our java app
container.
).
Thanks,
Jun
On Wed, Jun 19, 2013 at 6:32 PM, Jason Rosenberg j...@squareup.com wrote:
Was just reading about Controlled Shutdown here:
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools
Is this something that can be invoked from code, from within a container
In the 0.8 config, log.dir is now log.dirs. It looks like the singular
log.dir is still supported, but under the covers the property is log.dirs.
I'm curious, does this take a comma separated list of directories? The new
config page just says:
The directories in which the log data is kept
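For what it's worth, log.dirs does accept a comma-separated list; a sketch with two hypothetical mount points:

```properties
# Partitions are spread across the listed directories.
log.dirs=/data1/kafka-logs,/data2/kafka-logs
```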
On Wed, Jun 19, 2013 at 10:25 PM, Jason Rosenberg j...@squareup.com
wrote:
In the 0.8 config, log.dir is now log.dirs. It looks like the singular
log.dir is still supported, but under the covers the property is
log.dirs.
I'm curious, does this take a comma separated list
partitions to make load balance evenly (e.g. if you have only one big
partition per server then this isn't going to work).
-Jay
On Wed, Jun 19, 2013 at 11:01 PM, Jason Rosenberg j...@squareup.com
wrote:
is it possible for a partition to have multiple replicas on different
directories on the same
directories
on the same disk to increase its share.
-Jay
On Thu, Jun 20, 2013 at 12:59 PM, Jason Rosenberg j...@squareup.com
wrote:
This sounds like a great idea, to treat disks as just a bunch of disks,
or JBOD. HDFS works well this way.
Do all the disks need to be the same size, to use
So, I'm running into the case where after issuing a rolling restart, with
controlled shutdown enabled, the last server restarted ends up without any
partitions that it's the leader of. This is more pronounced of course if I
have only 2 servers in the cluster (during testing). I presume it's kind
(or -1)?
On Mon, Jun 24, 2013 at 1:29 AM, Jason Rosenberg j...@squareup.com wrote:
I have been using async mode with 0.7.2, but I'm wondering if I should
switch to sync mode, so I can use the new request.required.acks mode in a
sensible way.
I am already managing an async queue
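The combination being considered might look like this on the producer side (a sketch of the 0.8 properties, with ack level 1 as an example):

```properties
# Send each request synchronously and wait for the leader's ack.
producer.type=sync
# 0 = fire and forget, 1 = leader ack, -1 = all in-sync replicas ack.
request.required.acks=1
```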
file a jira
to track this?
Thanks,
Jun
On Mon, Jun 24, 2013 at 2:50 PM, Jason Rosenberg j...@squareup.com wrote:
Yeah,
I see that with ack=0, the producer will be in a bad state anytime the
leader for it's partition has changed, while the broker that it thinks is
the leader
, does it only then create a new socket?
Jason
On Mon, Jun 24, 2013 at 10:00 PM, Jun Rao jun...@gmail.com wrote:
That should be fine since the old socket in the producer will no longer be
usable after a broker is restarted.
Thanks,
Jun
On Mon, Jun 24, 2013 at 9:50 PM, Jason Rosenberg j
for that topic is
refreshed, even though the ramifications should be that all topics which
have the same leader might need to be refreshed, especially in response to
a connection reset by peer.
Jason
On Mon, Jun 24, 2013 at 10:14 PM, Jason Rosenberg j...@squareup.com wrote:
Jun,
To be clear
, Jun Rao jun...@gmail.com wrote:
I haven't seen this issue before. We do have ~1K topics in one of the Kafka
clusters at LinkedIn.
Thanks,
Jun
On Thu, May 23, 2013 at 11:05 AM, Jason Rosenberg j...@squareup.com
wrote:
Yeah, that's what it looks like to me (looking at the code). So, I'm
Any thoughts on my question, wrt scala version to prefer? Also, what of
the double dependency on zookeeper? Should I file a jira for that?
Jason
On Sun, Jul 14, 2013 at 9:26 PM, Jason Rosenberg j...@squareup.com wrote:
Thanks for doing this!
I'm wondering whether there is a reason
:36 AM, Jason Rosenberg j...@squareup.com wrote:
An update on this. It appears that the phenomenon I'm seeing is that
disk
space is freed on restart, but it's not due files getting deleted on
restart, but instead files are getting truncated on restart. It appears
that log files get pre
Rosenberg j...@squareup.com wrote:
Any thoughts on my question, wrt scala version to prefer? Also, what of
the double dependency on zookeeper? Should I file a jira for that?
Jason
On Sun, Jul 14, 2013 at 9:26 PM, Jason Rosenberg j...@squareup.com
wrote:
Thanks for doing
I'm planning to upgrade a 0.8 cluster from 2 old nodes, to 3 new ones
(better hardware). I'm using a replication factor of 2.
I'm thinking the plan should be to spin up the 3 new nodes, and operate as
a 5 node cluster for a while. Then first remove 1 of the old nodes, and
wait for the
release.
You can also replace a broker with a new server by keeping the same broker
id. When the new server starts up, it will replica data from the leader.
You know the data is fully replicated when both replicas are in ISR.
Thanks,
Jun
On Mon, Jul 22, 2013 at 2:14 AM, Jason Rosenberg j
I have been using a pom file for 0.8.0 that I hand-edited from the one
generated with sbt make:pom. Now that there's a version up on maven
central, I'm trying to use that.
It looks like the pom file hosted now on maven central, is invalid for
maven?
I'm looking at this:
, Jul 24, 2013 at 1:47 AM, Jason Rosenberg j...@squareup.com wrote:
I have been using a pom file for 0.8.0 that I hand-edited from the one
generated with sbt make:pom. Now that there's a version up on maven
central, I'm trying to use that.
It looks like the pom file hosted now on maven central
Joe,
I've verified that the version of the pom in the apache releases repo works
for me (but it still has this issue:
https://issues.apache.org/jira/browse/KAFKA-978).
Thanks,
Jason
On Wed, Jul 24, 2013 at 10:37 AM, Jason Rosenberg j...@squareup.com wrote:
Joe,
Unfortunately, I'm not sure
:32 PM, Jay Kreps jay.kr...@gmail.com wrote:
Interesting. Yes it will respect whatever setting it is given for new
segments created from that point on.
-Jay
On Tue, Jul 16, 2013 at 11:23 AM, Jason Rosenberg j...@squareup.com
wrote:
Ok,
An update on this. It seems we are using XFS
would solve the issue? Is this at all related to the
use of sparse files for the indexes (i.e. RandomAccessFile.setLength(10MB)
when we create the index)? Does this affect other filesystems or just xfs?
-Jay
On Fri, Jul 26, 2013 at 12:42 AM, Jason Rosenberg j...@squareup.com
wrote:
It looks
Jay,
This seems like a great direction. Simplifying the consumer client would
be a big win, and +1 for more native java client integration.
On the last point, regarding memory usage for buffering per partition. I
would think it could be possible to devise a dynamic queuing system, to
allow
One thing is that you need to make sure the consumer starts consuming from
the beginning of the topic, otherwise by default, it will start from the
latest message in the topic, from the time it starts up. Since the
consumer and producer are asynchronous, it's hard to assert that the
consumer is
Yeah, the basics are there, e.g. in the doc for the zookeeper.connect
property:
Specifies the zookeeper connection string in the form
hostname:port/chroot. Here the chroot is a base directory which is
prepended to all path operations (this effectively namespaces all kafka
znodes to allow sharing
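A sketch of the property described above (the hosts and chroot name here are hypothetical):

```properties
# Everything this cluster writes to ZK lives under /kafka-cluster-1,
# so several Kafka clusters can share one ZooKeeper ensemble.
zookeeper.connect=zk1:2181,zk2:2181,zk3:2181/kafka-cluster-1
```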
docs, when I first came
across them. Hence I always push people to read them if they haven't done
so.
Cheers,
Philip
On Aug 10, 2013, at 12:08 PM, Jason Rosenberg j...@squareup.com wrote:
Yeah, the basics are there, e.g. in the doc for the zookeeper.connect
property:
Specifies
from my iPhone
On Aug 14, 2013, at 1:49 PM, Jason Rosenberg j...@squareup.com wrote:
I'm getting ready to try out this configuration (use multiple disks, no
RAID, per broker). One concern is the procedure for recovering if there
is
a disk failure.
If a disk fails, will the broker go
from my iPhone
On Aug 15, 2013, at 12:52 AM, Jason Rosenberg j...@squareup.com wrote:
Ok, that makes sense that the broker will shut itself down.
If we bring it back up, can this be with an altered set of log.dirs?
Will
the destroyed partitions get rebuilt on a new log.dir? Or do we have
to this, though I would hope it is not possible.
Basically if our write to the fs succeeds and replicas acknowledge then we
send back the ack.
-Jay
On Thu, Aug 15, 2013 at 11:12 AM, Jason Rosenberg j...@squareup.com
wrote:
Hmmm... I guess I was thinking that a broker could receive a message
I'm using the kafka.javaapi.producer.Producer class from a java client.
I'm wondering if it ever makes sense to refresh a producer by stopping it
and creating a new one, for example in response to a downstream IO error
(e.g. a broker got restarted, or a stale socket, etc.).
Or should it always
Vadim,
We wrap kafka in our own java service container, which as a happy
coincidence, uses yammer metrics also. The yammer library has a
GraphiteReporter, which you can configure, which will run a background
thread and send all configured yammer metrics to graphite at regular
intervals, e.g.
and NotLeaderForPartitionException
are recoverable. MessageSizeTooLargeException may be recoverable with a
smaller batch size.
Thanks,
Jun
On Fri, Aug 23, 2013 at 4:09 PM, Jason Rosenberg j...@squareup.com wrote:
I'm using the kafka.javaapi.producer.Producer class from a java client
than the broker can handle. This may or may not be recoverable since it
depends on the load.
Thanks,
Jun
On Sat, Aug 24, 2013 at 1:44 AM, Jason Rosenberg j...@squareup.com wrote:
Jun,
There are several others I've seen that I would have thought would be
retryable (possibly after
the
final attempt)
Something like this makes sense. Would you mind creating a JIRA for this
so
we can
discuss a solution there ?
Thanks,
Neha
On Sat, Aug 24, 2013 at 10:41 AM, Jason Rosenberg j...@squareup.com
wrote:
Thanks Neha,
On Sat, Aug 24, 2013 at 10:06 AM, Neha
Will this work if we are using a TopicFilter, that can map to multiple
topics. Can I create multiple connectors, and have each use the same Regex
for the TopicFilter? Will each connector share the set of available
topics? Is this safe to do?
Or is it necessary to create mutually
So, it seems that if I want to set a custom serializer class on the
producer (in 0.8), I have to use a class that includes a special
constructor like:
public class MyKafkaEncoder<MyType> implements Encoder<MyType> {
// This constructor is expected by the kafka producer, used by reflection
public
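A self-contained sketch of such an encoder. The Encoder interface and VerifiableProperties class are stubbed inline here so the snippet compiles on its own; in real code they come from kafka 0.8's kafka.serializer and kafka.utils packages:

```java
// Stand-ins for kafka.serializer.Encoder and kafka.utils.VerifiableProperties,
// stubbed so this sketch is self-contained; real code imports the kafka classes.
interface Encoder<T> { byte[] toBytes(T t); }
class VerifiableProperties { }

// The producer instantiates the encoder via reflection, so it looks for a
// public constructor taking a VerifiableProperties argument.
public class MyKafkaEncoder implements Encoder<String> {
    public MyKafkaEncoder(VerifiableProperties props) {
        // props carries the producer config, should the encoder need it
    }

    @Override
    public byte[] toBytes(String message) {
        return message.getBytes(java.nio.charset.StandardCharsets.UTF_8);
    }
}
```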
I just encountered the same issue (and I ended up following the same
work-around as Paul).
One thing I noticed too, is that since the broker went down hard with an
IOException when the disk filled up, it also needed 'recover' most of the
logs on disk as part of the startup sequence. So any
Sorry for the crazy long log trace here (feel free to ignore this message
:))
I'm just wondering if there's an easy way to sensibly reduce the amount of
logging that a kafka producer (0.8) will emit if I try to send a message
(with ack level 1), if no broker is currently running?
This is from one
, 2013 at 3:26 PM, Jason Rosenberg j...@squareup.com wrote:
Sorry for the crazy long log trace here (feel free to ignore this message
:))
I'm just wondering if there's an easy way to sensibly reduce the amount
of
logging that a kafka producer (0.8) will emit if I try to send a message
filed: https://issues.apache.org/jira/browse/KAFKA-1066
On Tue, Sep 24, 2013 at 12:04 PM, Neha Narkhede neha.narkh...@gmail.com wrote:
This makes sense. Please file a JIRA where we can discuss a patch.
Thanks,
Neha
On Tue, Sep 24, 2013 at 9:00 AM, Jason Rosenberg j...@squareup.com wrote