Hi,
I am using the Python brod library to write to Kafka 0.8.0.
I am on a 2-core server with 4 GB of RAM on Ubuntu 12.04.
I have zero clue what the below error means. How do I tweak kafka to get it
to work?
[2014-02-06 07:21:57,001] INFO Closing socket connection to
/222.127.xxx.xxx. (kafka.net
Thanks all. I killed ZK and brought up a new server. That resolved the
issue.
Thanks
On Thu, Feb 6, 2014 at 2:06 PM, Jun Rao wrote:
> Could it be that you have some old data in ZK? Could you wipe out all
> existing ZK data or use a new ZK namespace?
>
> Thanks,
>
> Jun
>
>
> On Wed, Feb 5, 2
Could it be that you have some old data in ZK? Could you wipe out all
existing ZK data or use a new ZK namespace?
Thanks,
Jun
On Wed, Feb 5, 2014 at 6:59 PM, David Montgomery
wrote:
> Hi,
>
> How does a kafka of one have issues becoming a leader? I am using kafka
> 8.0. Should I just ignore
I think you misspelled the property name. It should be
"advertised.host.name", instead of "advertise.host.name".
Thanks,
Jun
On Wed, Feb 5, 2014 at 6:06 PM, Balasubramanian Jayaraman (Contingent) <
balasubramanian.jayara...@autodesk.com> wrote:
> Thanks Joel.
>
> It seems the Broker is register
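For reference, a sketch of the corrected property in server.properties; the hostname and port shown are placeholders, not values from this thread:

```properties
# server.properties: the externally visible address the broker
# registers in ZooKeeper. Hostname below is a placeholder.
advertised.host.name=broker1.example.com
advertised.port=9092
```

Clients and other brokers connect using this advertised address, so it must be reachable from outside the broker host.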
Did you set both the key and the message encoder to DefaultEncoder?
Thanks,
Jun
On Wed, Feb 5, 2014 at 2:46 PM, Tom Amon wrote:
> Hi,
>
> We have a functioning producer that uses as the Producer
> and KeyedMessage signature. We specify the DefaultEncoder in the
> properties. In Java 1.6 it w
Hi,
We have a similar use case. In our case it's about performance metrics and
situations where we play catch-up. Instead of processing data
chronologically, it would make sense to process newest data first, so
performance metrics graphs for "now" have data.
Here is a related thread: http://sea
Hi David,
Could you try to use the latest trunk code base and see if this issue goes
away?
Guozhang
On Wed, Feb 5, 2014 at 6:59 PM, David Montgomery
wrote:
> Hi,
>
> How does a kafka of one have issues becoming a leader? I am using kafka
> 8.0. Should I just ignore this error? How would I r
Hi,
How does a single-node Kafka cluster have issues becoming a leader? I am
using Kafka 0.8.0. Should I just ignore this error? How would I resolve
it? I just have one Kafka server in dev.
Below is how I start kafka:
/var/lib/kafka-0.8.0-src/bin/kafka-server-start.sh
/var/lib/kafka-0.8.0-src/config/server.pro
Thanks Joel.
It seems the broker is registered in ZooKeeper with the IP 10.199.31.87.
The output of the command is given below.
[root@ip-10-199-31-87 bin]# ./zookeeper-shell.sh 0.0.0.0:2181 get /brokers/ids/1
Connecting to 0.0.0.0:2181
WATCHER::
WatchedEvent state:SyncConnected type:None p
On Wed, Feb 05, 2014 at 04:51:16PM -0800, Carl Lerche wrote:
> So, I tried enabling debug logging, I also made some tweaks to the
> config (which I probably shouldn't have) and craziness happened.
>
> First, some more context. Besides the very high network traffic, we
> were seeing some other issu
Overall, +1 on sticking with key-values for configs.
> Con: The IDE gives nice auto-completion for pojos.
>
> Con: There are some advantages to javadoc as a documentation mechanism for
> java people.
Optionally, both the above cons can be addressed (to some degree) by
wrapper config POJOs that
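As a sketch of that idea: keep the plain key-value map as the source of truth and layer a thin typed wrapper on top for IDE completion. The wrapper class and accessor names below are illustrative, not part of any real Kafka client API; the two property keys are the 0.8 producer's.

```python
# Sketch: key-value config stays authoritative; a read-only typed
# view on top gives auto-completion and documented defaults.

class ProducerConfigView:
    """Illustrative typed view over a key-value config map."""

    def __init__(self, props):
        self._props = props  # the plain map remains the source of truth

    @property
    def broker_list(self):
        return self._props.get("metadata.broker.list", "")

    @property
    def request_timeout_ms(self):
        # Typed accessor with a fallback default.
        return int(self._props.get("request.timeout.ms", 10000))


cfg = ProducerConfigView({"metadata.broker.list": "localhost:9092"})
print(cfg.broker_list)         # localhost:9092
print(cfg.request_timeout_ms)  # 10000 (fallback default)
```

The point is that the wrapper adds no new configuration surface: anything not wrapped is still reachable through the underlying map.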
So, I tried enabling debug logging, I also made some tweaks to the
config (which I probably shouldn't have) and craziness happened.
First, some more context. Besides the very high network traffic, we
were seeing some other issues that we were not focusing on yet.
* Even though the log retention w
Deleting topics is an ongoing JIRA (KAFKA-330), and we are shooting to
have it checked in soon.
On Wed, Feb 5, 2014 at 3:59 PM, David Birdsong wrote:
> On Wed, Feb 5, 2014 at 2:22 PM, Robert Rodgers
> wrote:
>
> > this would be great to add to the operational section of the Kafka
> > documentat
On Wed, Feb 05, 2014 at 03:59:29PM -0800, David Birdsong wrote:
> On Wed, Feb 5, 2014 at 2:22 PM, Robert Rodgers wrote:
>
> > this would be great to add to the operational section of the Kafka
> > documentation.
> >
>
> So is this a way to delete topics? Does this work?
Not really - (I'm assumi
Yes, I think so. The max fetch size defaults to 1 MB and
num.replica.fetchers to one, which should be sufficient for most people.
Setting those higher would typically lead to higher memory usage when
there are a large number of topics, and with num.replica.fetchers it
would multiply the number of socket connecti
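The two settings mentioned live in the broker's server.properties; a sketch restating the defaults (comments sit on their own lines, since properties files have no inline comments):

```properties
# server.properties
# Number of replica fetcher threads per source broker; default is 1.
num.replica.fetchers=1
# Max bytes fetched per partition per request; default is 1 MB.
replica.fetch.max.bytes=1048576
```

Raising num.replica.fetchers multiplies follower socket connections and memory use, as noted above.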
On Wed, Feb 5, 2014 at 2:22 PM, Robert Rodgers wrote:
> this would be great to add to the operational section of the Kafka
> documentation.
>
So is this a way to delete topics? Does this work?
>
> On Feb 5, 2014, at 2:18 PM, Andrew Otto wrote:
>
> >> - Increasing num.replica.fetchers (defaul
And you guys like the existing helper code for that?
-Jay
On Wed, Feb 5, 2014 at 10:17 AM, Neha Narkhede wrote:
> I'm not so sure about the static config names used in the producer, but I'm
> +1 on using the key value approach for configs to ease operability.
>
> Thanks,
> Neha
>
>
> On Wed, Fe
Technically this is possible with the existing server and protocol and
could be implemented using the "low level" network client. The high-level
client doesn't really allow this. This would be a good thing to think about
as we start on the redesign of that client. I don't think it has to be
terribl
Do we have the right default?
-Jay
On Wed, Feb 5, 2014 at 2:04 PM, Joel Koshy wrote:
>
> > topics are all caught up, but I have one high volume topic (around
> > 40K msgs/sec) that is taking much longer. I just took a few samples
> > of Replica-MaxLag to see how long it would take to catch up
Hi,
Here is a use case that we would like to see kafka’s client support in the
future. Currently reading a topic is FIFO. It would be awesome to read a topic
in LIFO order. Put another way, we would like to be able to read a topic in
reverse.
Why? Basically we have per user streams which we ha
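A rough sketch of what a client would have to do internally to emulate this: walk the offset range in fixed-size windows from the log-end offset backwards, reversing each window locally. `fetch` below is a stand-in for a real fetch-at-offset call, not a brod or Kafka API:

```python
# Sketch: emulate LIFO reading of an append-only log by fetching
# fixed-size offset windows in reverse order.

def fetch(log, start, count):
    """Stand-in for a fetch-at-offset call; returns FIFO order."""
    return log[start:start + count]

def read_backwards(log, batch=3):
    """Yield messages newest-first by walking offset windows in reverse."""
    end = len(log)  # log-end offset
    while end > 0:
        start = max(0, end - batch)
        # Each window arrives FIFO and is reversed locally.
        for msg in reversed(fetch(log, start, end - start)):
            yield msg
        end = start

log = [f"m{i}" for i in range(7)]  # offsets 0..6
print(list(read_backwards(log)))
# ['m6', 'm5', 'm4', 'm3', 'm2', 'm1', 'm0']
```

With the real broker the window boundaries would come from the offset API, and new messages appended after the walk started would simply be outside the initial end offset.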
Hi,
We have a functioning producer that uses as the Producer
and KeyedMessage signature. We specify the DefaultEncoder in the
properties. In Java 1.6 it works fine. However, under Java 1.7 it gives the
following error:
Failed to collate messages by topic, partition due to: [B incompatible with
j
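Jun's question above refers to the old (0.8) producer's encoder properties; a sketch of setting both explicitly, with a placeholder broker address:

```properties
# 0.8 producer properties; the broker address is a placeholder.
metadata.broker.list=localhost:9092
# DefaultEncoder passes byte[] through unchanged; set it for both
# the message value and the key.
serializer.class=kafka.serializer.DefaultEncoder
key.serializer.class=kafka.serializer.DefaultEncoder
```

If only one of the two is set, the other falls back to a different encoder and the byte[]/String mismatch surfaces at send time.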
this would be great to add to the operational section of the Kafka
documentation.
On Feb 5, 2014, at 2:18 PM, Andrew Otto wrote:
>> - Increasing num.replica.fetchers (defaults is one)
> Awesome! I just tried this one, bumped it up to 8 (12 cores on this broker
> box). It is now catching up a
> - Increasing num.replica.fetchers (defaults is one)
Awesome! I just tried this one, bumped it up to 8 (12 cores on this broker
box). It is now catching up at around 17K msgs/sec, which will mean it will
finish in about 4 or 5 hours. I’ll check up on it again tomorrow.
That should do it, Th
> topics are all caught up, but I have one high volume topic (around
> 40K msgs/sec) that is taking much longer. I just took a few samples
> of Replica-MaxLag to see how long it would take to catch up.
> Currently, it is behind about 12.5 million messages and is catching
> up at a rate of about 1
Use the zookeeper-shell script:
./bin/zookeeper-shell.sh <host>:<port> get /brokers/ids/<brokerId>
On Wed, Feb 05, 2014 at 07:04:50AM +, Balasubramanian Jayaraman
(Contingent) wrote:
> Where should I look for this information? From the logs, I could see
> ZooKeeper is bound to port 2181 and IP 0.0.0.0. The Kafka Se
Can you enable DEBUG logging in log4j and see what requests are coming in?
-Jay
On Tue, Feb 4, 2014 at 9:51 PM, Carl Lerche wrote:
> Hi Jay,
>
> I do not believe that I have changed the replica.fetch.wait.max.ms
> setting. Here I have included the kafka config as well as a snapshot
> of jnetto
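Jay's suggestion can be sketched in config/log4j.properties; the appender name follows the file Kafka ships, but may differ in other setups:

```properties
# Raise Kafka's own loggers to DEBUG.
log4j.logger.kafka=DEBUG
# The request logger records each request the broker receives
# (appender name as in the shipped config/log4j.properties).
log4j.logger.kafka.request.logger=DEBUG, requestAppender
log4j.additivity.kafka.request.logger=false
```

The request log is verbose on a busy broker, so it is worth turning back down once the offending client is identified.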
I'm not really an ops person either. I was using jnettop for this.
On Wednesday, February 5, 2014, S Ahmed wrote:
> Sorry, I'm not an ops person, but what tools do you use to monitor traffic
> between servers?
>
>
> On Tue, Feb 4, 2014 at 11:46 PM, Carl Lerche
> >
> wrote:
>
> > Hello,
> >
> > I'
It might. I considered this but ended up going this way. Now that we have
changed partitionKey=>partition it almost works. The difference is the
consumer gets an offset too which the producer doesn't have.
One thing I think this points to is the value of getting the consumer java
api worked out ev
Currently, the user will send ProducerRecords using the new producer. The
expectation will be that you get the same thing as output from the
consumer. Since ProducerRecord is a holder for topic, partition, key and
value, does it make sense to rename it to just Record? So, the send/receive
APIs would
I'm not so sure about the static config names used in the producer, but I'm
+1 on using the key value approach for configs to ease operability.
Thanks,
Neha
On Wed, Feb 5, 2014 at 10:10 AM, Guozhang Wang wrote:
> +1 for the key-value approach.
>
> Guozhang
>
>
> On Tue, Feb 4, 2014 at 9:34 AM,
+1 for the key-value approach.
Guozhang
On Tue, Feb 4, 2014 at 9:34 AM, Jay Kreps wrote:
> We touched on this a bit in previous discussions, but I wanted to draw out
> the approach to config specifically as an item of discussion.
>
> The new producer and consumer use a similar key-value config
Sorry, I'm not an ops person, but what tools do you use to monitor traffic
between servers?
On Tue, Feb 4, 2014 at 11:46 PM, Carl Lerche wrote:
> Hello,
>
> I'm running a 0.8.0 Kafka cluster of 3 servers. The service that it is
> for is not in full production yet, so the data written to cluster i
Hi all!
I recently had a problem where one out of two of my brokers would not reboot
due to a hardware failure. The broker was down for almost a week before the
required part came in and was fixed by our datacenter tech. During that time,
the live broker was able to handle all messages for al