Hi,
I have a remote server (EC2) set up with a Kafka cluster. There are 3
brokers, each running on port 9092, 9093, and 9094 respectively. ZooKeeper is
running on port 2181.
When I send a message to the brokers from my PC, I get the exception
given below. I did a dump on the remote server,
Hi,
This, I guess, is not just a question for Kafka but for all event-driven
systems, but since most people here deal with events, I would like to
hear some basic suggestions for modelling event messages, or even better,
a pointer to some relevant literature/website that deals with this stuff?
Kafka Devs,
Just wondering if there'll be anything along the lines of Kafka presentations
and/or tutorials at ApacheCon in Denver in April?
I was going to submit a talk on Kafka and Mesos. I'm still trying to nail
down the dates in my schedule, though. Anyone else going? Maybe we could do a
meetup or a BoF or something?
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source
Are you using Mesos?
On Jan 27, 2014, at 8:39, Joe Stein joe.st...@stealth.ly wrote:
I was going to submit a talk on Kafka and Mesos. I'm still trying to nail
down the dates in my schedule, though. Anyone else going? Maybe we could do a
meetup or a BoF or something?
problem like this:
[2014-01-27 14:03:03,962] ERROR Closing socket for /192.168.86.71 because of
error (kafka.network.Processor)
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcher.read0(Native Method)
at
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-OnEC2,whycan'tmyhigh-levelconsumersconnecttothebrokers
?
thanks,
Jun
On Mon, Jan 27, 2014 at 12:17 AM, Balasubramanian Jayaraman (Contingent)
balasubramanian.jayara...@autodesk.com wrote:
Hi,
I have a remote
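The FAQ entry Jun links to typically comes down to the brokers registering their EC2-internal hostname in ZooKeeper, which remote clients can't resolve or reach. A hedged sketch of the relevant 0.8-era server.properties entries (all values below are placeholders for illustration):

```properties
# server.properties -- one file per broker; values are hypothetical
broker.id=0
port=9092
# The hostname/port the broker registers in ZooKeeper and hands back to
# clients in metadata responses. On EC2 this must be reachable from the
# *client's* network (e.g. the instance's public DNS name); otherwise
# external producers/consumers fail right after the metadata request.
advertised.host.name=<your-ec2-public-dns>
advertised.port=9092
zookeeper.connect=localhost:2181
```

Each of the three brokers would need its own broker.id, port, and advertised.port.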
Yes
/***
Joe Stein
Founder, Principal Consultant
Big Data Open Source Security LLC
http://www.stealth.ly
Twitter: @allthingshadoop
/
On Jan 27, 2014, at 11:57 AM, Steve Morin steve.mo...@gmail.com wrote:
This could be normal. It just means that the client closed the socket,
potentially during consumer rebalancing.
Thanks,
Jun
On Sun, Jan 26, 2014 at 10:33 PM, 我 liguoqing19861...@163.com wrote:
problem like this;
[2014-01-27 14:03:03,962] ERROR Closing socket for /192.168.86.71 because
@Joe, sounds interesting. Will keep an eye out for the session
On Mon, Jan 27, 2014 at 4:57 PM, Steve Morin steve.mo...@gmail.com wrote:
Are you using Mesos?
On Jan 27, 2014, at 8:39, Joe Stein joe.st...@stealth.ly wrote:
I was going to submit a talk on Kafka and Mesos. I still am
Jay -
Config - your explanation makes sense. I'm just so accustomed to having
Jackson automatically map my configuration objects to POJOs that I've
stopped using property files. They are the lingua franca, though. The only
thought might be to separate the config interface from the implementation to
allow for
Clark,
Yeah, good point. Okay, I'm sold on Closeable. AutoCloseable would be much
better, but for now we are retaining Java 1.6 compatibility, and I suspect the
use case of temporarily creating a producer would actually be the rarer
case.
-Jay
On Mon, Jan 27, 2014 at 9:29 AM, Clark Breyman
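Jay's 1.6-compatibility point is worth illustrating: java.io.Closeable compiles on Java 6, and on Java 7+ it still works in try-with-resources, because Closeable was retrofitted to extend AutoCloseable. A minimal sketch (the producer class and its methods are hypothetical, not the actual Kafka API):

```java
import java.io.Closeable;

// Hypothetical producer sketch -- not the real Kafka producer API.
// Implementing Closeable keeps the class compilable on Java 6, while
// Java 7 callers still get try-with-resources for free.
public class CloseableProducerSketch implements Closeable {
    private boolean closed = false;

    public void send(String topic, String message) {
        if (closed) throw new IllegalStateException("producer is closed");
        // a real implementation would batch and hand off to the network layer
    }

    @Override
    public void close() {
        // Closeable.close declares IOException, but an override may
        // narrow that away; here we just flush/release resources.
        closed = true;
    }

    public boolean isClosed() { return closed; }

    public static void main(String[] args) {
        // Java 7 style: the producer is closed automatically on scope exit.
        try (CloseableProducerSketch p = new CloseableProducerSketch()) {
            p.send("my-topic", "hello");
        }
    }
}
```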
Oh, I should note that the built-in consumer offset checker tool works for me,
but I was hoping to use something like jmxtrans so that I could easily export
the data to Ganglia, Graphite, or some other graphing tool. Jmxtrans was
recommended on the Kafka wiki, and there's another project called
Thanks for your response, Joel.
I am currently trying out JMXTrans to get the stats from the MBeans, and I can
read attributes fine, but it doesn't support JMX operations yet. What tool do
you use for your reporting? Is there another tool that supports JMX operations
so that I can use the getOffsetLag
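For what it's worth, JMX operations can be invoked directly through the standard javax.management API even when a collector like jmxtrans only reads attributes. The self-contained sketch below registers a stand-in MBean and invokes an operation by name; the Lag bean and its getLag operation are made up for illustration, not Kafka's actual MBeans:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Sketch: invoking a JMX *operation* (not just reading attributes).
// Attribute collectors call getAttribute; operations require invoke().
public class JmxOperationSketch {

    public interface LagMBean {
        long getLag(String topic); // takes a parameter, so it is an operation
    }

    public static class Lag implements LagMBean {
        public long getLag(String topic) {
            return 42L; // a real impl: log-end offset minus consumer offset
        }
    }

    public static long invokeLag(String topic) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name = new ObjectName("sketch:type=Lag");
            if (!server.isRegistered(name)) {
                server.registerMBean(new Lag(), name);
            }
            // MBeanServer.invoke is what a tool must call to run an operation.
            return (Long) server.invoke(name, "getLag",
                    new Object[] { topic }, new String[] { "java.lang.String" });
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeLag("my-topic"));
    }
}
```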
re: Using a package to avoid ambiguity - unlike Scala, this is really
cumbersome in Java, as it doesn't support package imports or import aliases,
so the only way to distinguish is to use the fully qualified path.
re: Closeable - it can throw IOException but is not required to. Same with
Hi Xuyen,
SPM for Kafka gets all this stuff already. I didn't look into whether/how
exactly one can spot consumer lag, but if you spot a way to do it, please
share.
There is a demo of SPM for Kafka: if you go to
https://apps.sematext.com/demo you can see all the Kafka performance
graphs and see if
AutoCloseable would be nice for us as most of our code is using Java 7 at
this point.
I like Dropwizard's configuration mapping to POJOs via Jackson, but if you
wanted to stick with property maps I don't care enough to object.
If the producer only dealt with bytes, is there a way we could still
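Clark's "separate the config interface from the implementation" idea can be sketched without any Jackson dependency: callers code against a typed interface, and one implementation is backed by a plain property map, so property files stay the lingua franca while a POJO-mapped implementation could be swapped in later. The key names and defaults below are illustrative:

```java
import java.util.Properties;

// Sketch: a typed config interface with a Properties-backed implementation.
// Callers never see how the values are sourced, so a Jackson/POJO-backed
// implementation could replace PropertiesConfig without touching them.
public class ConfigSketch {

    public interface ProducerConfig {
        String brokerList();
        int requestTimeoutMs();
    }

    public static class PropertiesConfig implements ProducerConfig {
        private final Properties props;

        public PropertiesConfig(Properties props) { this.props = props; }

        public String brokerList() {
            // key name and default are hypothetical examples
            return props.getProperty("metadata.broker.list", "localhost:9092");
        }

        public int requestTimeoutMs() {
            return Integer.parseInt(props.getProperty("request.timeout.ms", "10000"));
        }
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("request.timeout.ms", "5000");
        ProducerConfig config = new PropertiesConfig(p);
        System.out.println(config.brokerList() + " " + config.requestTimeoutMs());
    }
}
```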
I am using Storm and Kafka for replaying messages.
Now I want to save the offset of each message and then use it later for
resending the message.
So my question is: how can I fetch a single message using its offset?
That is, I know the offset of a message and I want to use the offset to
fetch that
I have the same need, and I've just created a Jira:
https://issues.apache.org/jira/browse/KAFKA-1231
The reasoning behind it is that our topics are created on a per-product
basis, and each of them usually starts big during the initial weeks and
gradually shrinks over time (1-2 years).
thanks
Rather than fetching the message again, you could cache it in the spout,
emit it again if the *fail* method is called, and delete it when the
*ack* method is called. This is possible as Storm guarantees to call the
*fail* and *ack* methods with the *messageId* on the exact same spout that
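The cache-in-the-spout idea above can be sketched as a plain class; the emit/ack/fail names mirror Storm's spout callbacks, but this is not an actual ISpout implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of caching emitted messages in the spout: keep each message in a
// map keyed by messageId, drop it on ack(), look it up again on fail().
public class CachingSpoutSketch {
    private final Map<Long, String> pending = new HashMap<Long, String>();
    private long nextId = 0;

    // Called when we emit a tuple: remember it until Storm acks it.
    public long emit(String message) {
        long id = nextId++;
        pending.put(id, message);
        return id;
    }

    // Storm guarantees ack/fail arrive on the same spout instance that
    // emitted the tuple, so a local (non-shared) map is enough.
    public void ack(long messageId) {
        pending.remove(messageId);
    }

    public String fail(long messageId) {
        return pending.get(messageId); // still cached -> re-emit this
    }

    public int pendingCount() { return pending.size(); }
}
```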
In Kafka you cannot fetch just one message by its offset; what you get
instead is the ability to start fetching from the given offset. Of course you
can just take the first message that you want, then discard the rest and stop
fetching immediately, but I think a better idea would be to cache the
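That fetch-from-an-offset pattern can be shown schematically; a List stands in for a partition's log here, and none of the names are the real consumer API:

```java
import java.util.Arrays;
import java.util.List;

// Schematic only: a Kafka fetch starts *at* an offset and returns data
// from there onward, so "fetch one message" means taking the first
// record of the response and discarding the rest.
public class FetchByOffsetSketch {

    // Pretend fetch: returns the log from `offset` to the end.
    public static List<String> fetchFrom(List<String> log, int offset) {
        return log.subList(offset, log.size());
    }

    // "Fetch a single message": take the first record, stop fetching.
    public static String fetchOne(List<String> log, int offset) {
        List<String> batch = fetchFrom(log, offset);
        return batch.isEmpty() ? null : batch.get(0);
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList("m0", "m1", "m2", "m3");
        System.out.println(fetchOne(log, 2));
    }
}
```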
Hey All, I've been cobbling together a high-level consumer for golang
building on top of Shopify's Sarama package and wanted to run the basic
design by the list and get some feedback or pointers on things I've missed
or will eventually encounter on my own.
I'm using zookeeper to coordinate
Hello David,
One thing about using ZK locks to own a partition is load balancing. If
you are unlucky, some consumers may get all the locks and some may get none,
and hence have no partitions to consume.
Also, you may need some synchronization between the consumer thread and the
offset thread. For
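Guozhang's load-balancing concern is one reason consumer-group implementations tend to compute a deterministic assignment instead of racing for per-partition locks: every consumer sorts the same membership list and takes its own slice. A sketch in the spirit of range assignment (not Kafka's exact algorithm):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Deterministic partition assignment sketch: each consumer computes the
// same sorted member list and derives only its own slice, so no member
// can accidentally grab everything.
public class RangeAssignSketch {

    public static List<Integer> partitionsFor(String consumer,
                                              List<String> consumers,
                                              int numPartitions) {
        List<String> sorted = new ArrayList<String>(consumers);
        Collections.sort(sorted);                  // same order everywhere
        int me = sorted.indexOf(consumer);
        int per = numPartitions / sorted.size();   // base share each
        int extra = numPartitions % sorted.size(); // first `extra` get one more
        int start = me * per + Math.min(me, extra);
        int count = per + (me < extra ? 1 : 0);
        List<Integer> mine = new ArrayList<Integer>();
        for (int p = start; p < start + count; p++) mine.add(p);
        return mine;
    }
}
```

With 2 consumers and 5 partitions, the split is 3/2 rather than a lock-race's possible 5/0.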
Siyuan, Marc:
We are currently working on topic-deletion support
(KAFKA-330, https://issues.apache.org/jira/browse/KAFKA-330);
would first-delete-then-recreate-with-fewer-partitions work for your cases?
The reason why we are trying to avoid shrinking partitions is that it would
make the logic very
On Mon, Jan 27, 2014 at 4:19 PM, Guozhang Wang wangg...@gmail.com wrote:
Hello David,
One thing about using ZK locks to own a partition is load balancing. If
you are unlucky some consumer may get all the locks and some may get none,
hence have no partitions to consume.
I've considered this
Thanks for the suggestions.
The problem with caching is that I would have to cache a lot of messages, as
I don't know which one is going to fail.
If a message is processed in one go, caching that message is unnecessary;
that's why I want to replay it from Kafka itself.
And I want to use the offset as