Forgot to mention, I'm using the 2.11 (Scala) build of Kafka.
On 24 March 2016 at 16:55, Ratha v wrote:
> Hi all;
>
> I'm new to kafka and wrote a simple multithreaded kafka consumer. when try
> to consume the messages,It continuously throwing timeoutexception..How can
> i get rid of
Hi all;
I'm new to Kafka and wrote a simple multithreaded Kafka consumer. When I try
to consume messages, it continuously throws a TimeoutException. How can
I get rid of this?
I have multiple topics.
*Executor*
public class MessageListener {
private Properties properties;
private
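The Executor snippet above is cut off by the archive, but the usual shape of this pattern is one worker thread per topic, each owning its own consumer (consumers are not safe to share across threads). A minimal plain-Java sketch of just the dispatch side, with the actual Kafka polling stubbed out by a print and all topic names invented for illustration:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: one worker per topic submitted to a fixed pool.
// In a real listener each worker would hold its own consumer instance and
// loop over its message stream; here that step is replaced by a print.
public class MessageListenerSketch {
    public static void main(String[] args) throws InterruptedException {
        List<String> topics = Arrays.asList("topic-a", "topic-b", "topic-c");
        ExecutorService pool = Executors.newFixedThreadPool(topics.size());
        CountDownLatch done = new CountDownLatch(topics.size());
        for (String topic : topics) {
            pool.submit(() -> {
                // Stand-in for the per-topic consume loop.
                System.out.println("consuming from " + topic);
                done.countDown();
            });
        }
        done.await(5, TimeUnit.SECONDS);
        pool.shutdown();
    }
}
```

The point of the pattern is isolation: each thread has exclusive use of its consumer, so no cross-thread access errors occur.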
I am using the Java client and Kafka 0.8.2. Since events are corrupted in
the Kafka broker, I can't read and replay them again.
On Thu, Mar 24, 2016 at 9:42 AM, Becket Qin wrote:
> Hi Sunil,
>
> The messages in Kafka has a CRC stored with each of them. When consumer
> receives a
Hi,
-- I am new to Kafka and ZooKeeper. I have implemented my test environment
with one ZooKeeper node and 3 Kafka nodes. Now I want to grow my single
ZooKeeper node into a 3-node ensemble.
-- I am continuously producing messages to one of the topics with a Python
script; while message production is in progress
Hi Sunil,
The messages in Kafka have a CRC stored with each of them. When a consumer
receives a message, it computes the CRC from the message bytes and
compares it to the stored CRC. If the computed CRC and the stored CRC do not
match, that indicates the message has been corrupted. I am not sure in your
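The check described above can be illustrated with plain `java.util.zip.CRC32` (Kafka's on-the-wire message layout is simplified away here; this only shows the compute-and-compare step):

```java
import java.util.zip.CRC32;

// Illustration of the consumer-side CRC check: recompute a CRC32 over the
// message bytes and compare it to the value the producer stored.
public class CrcCheck {
    static long crcOf(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload, 0, payload.length);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] message = "thrift-event-bytes".getBytes();
        long storedCrc = crcOf(message);   // what was recorded at produce time
        byte[] corrupted = message.clone();
        corrupted[0] ^= 0xFF;              // flip bits to simulate corruption
        System.out.println("intact ok: " + (crcOf(message) == storedCrc));
        System.out.println("corrupted ok: " + (crcOf(corrupted) == storedCrc));
    }
}
```

A mismatch on the consumer side is how corruption like the 0.1% described below surfaces as an error rather than a silently bad event.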
Can someone help me out here?
On Wed, Mar 23, 2016 at 7:36 PM, sunil kalva wrote:
> Hi
> I am seeing few messages getting corrupted in kafka, It is not happening
> frequently and percentage is also very very less (less than 0.1%).
>
> Basically i am publishing thrift
Thank you, Guozhang! In my exploration, I did overlook the "transform"
method; this looks promising.
I could still use a little more help: I'm confused because, for this
sessionization use case, an invocation of the 'transform' method usually
suggests that a session is still active, so I'll have
More information about the issue:
When the issue happens, the controller is always on the 0.9 version Kafka
broker.
In server.log of other brokers, we can see this kind of error:
[2016-03-23 22:36:02,814] ERROR [ReplicaFetcherThread-0-5], Error for
partition [topic,208] to broker
Hello,
Are there any features planned that would enable an external process to
reset the offsets of a given consumer group? I realize this goes counter to
the design of the system, but it would be convenient if offsets could be
reset as a simple admin command.
The strategies we've investigated
Thanks for the information, Kunal. I will look into it.
On Tue, Mar 15, 2016 at 9:03 PM, Kunal Gupta wrote:
> LinkedIn/burrow tool is there for monitoring consumer
> On 16 Mar 2016 02:28, "Vinay Gulani" wrote:
>
> > Hi,
> >
> > I am new to Kafka
Guozhang,
Thanks for the detailed response, makes sense! Created
https://issues.apache.org/jira/browse/KAFKA-3455
Cheers,
Jon
On Wed, Mar 23, 2016 at 11:35 AM, Guozhang Wang wrote:
> Hello Jon,
>
> Thanks for the feedback. This is definitely something we wanted to support
Is there any way to generate/get "messages out rate" metrics in Kafka if the
message size is not fixed?
Thanks,
Vinay
We are evaluating Kafka starting with 0.9.0.1. With the v0.9 new consumer
clients, we can monitor/describe them while they are actually running, but
how do you get the last committed offsets of all groups once they are down
or have stopped consuming?
This command only works when "new consumers are
Yes I can reliably reproduce this.
To reproduce it, initially start with a single consumer node so that it owns
all the partitions. Then add another consumer node; after rebalancing you
would see that although the ownership of certain partitions is with the
new node, the older node, which now only
Hello Jon,
Thanks for the feedback. This is definitely something we wanted to support
in the Streams DSL.
One tricky thing, though, is that some operations do not translate to a
single processor, but a sub-graph of processors (think of a stream-stream
join, which is translated to actually 5
Hello Josh,
As of now Kafka Streams does not yet support session windows as in the
Dataflow model, though we have near-term plans to support it.
For now you can still work around it with the Processor, by calling the
"KStream.transform()" function, which can still return you a stream object.
Hi!
I was trying to build a topology using the DSL, add a custom processor and
then add a sink for it. However, it seems like there's no great way to do
this using the APIs, since the processor's internal "name" is not exposed
in the KStream interface, and the .process method doesn't return the
Hi there!
We have noticed that when commit requests are sent intensively, we
receive IllegalGenerationId.
Here are the settings we had the problem with: session timeout: 30 sec,
heartbeat rate: 3 sec.
The problem was resolved by increasing the session timeout to 180 sec.
So I suppose, due to whatever
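The timeout change described in this report can be sketched as consumer properties (property names per the 0.9 consumer configuration; the arithmetic just shows how much slack the new ratio leaves):

```java
import java.util.Properties;

// Sketch of the settings discussed: a session timeout much larger than the
// heartbeat interval leaves room for many missed heartbeats before the
// coordinator ejects the member and bumps the generation id.
public class ConsumerTimeouts {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("session.timeout.ms", "180000");   // 180 s, as in the fix above
        props.put("heartbeat.interval.ms", "3000");  // 3 s heartbeat rate
        int heartbeatsPerSession =
                Integer.parseInt(props.getProperty("session.timeout.ms"))
              / Integer.parseInt(props.getProperty("heartbeat.interval.ms"));
        System.out.println("missed heartbeats tolerated: " + heartbeatsPerSession);
    }
}
```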
Hi Anatoly and James,
Can one of you please file a JIRA? Please describe the problem in detail
and please include logs, if available.
Ismael
On Wed, Mar 23, 2016 at 5:44 AM, James Cheng wrote:
> Hi, we ran into this problem too. The only way we were able to bypass this
> was
Hi,
I am trying to run this code for a consumer.
import java.io.UnsupportedEncodingException;
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import
Hi
I am seeing a few messages getting corrupted in Kafka. It is not happening
frequently and the percentage is also very, very small (less than 0.1%).
Basically I am publishing Thrift events in byte-array format to Kafka
topics (without encoding such as Base64), and I also see more events than I
publish (i
How long does this last? Maybe there are some async commit-offset frames
left in the send buffer to that node? Can you reliably reproduce this?
On Wed, Mar 23, 2016 at 6:29 AM, Saurabh Daftary
wrote:
> I am running broker 0.9.0 with consumer 0.9.0.1.
>
> Seeing an
Nobody knows?
-- Brice
On Tue, Mar 22, 2016 at 3:05 PM, Brice Dutheil
wrote:
> I first noticed this (wrong) behavior when I want to connect to some feed
> via kafkacat, hereby represented as bier-bar.
>
> kafkacat -b localhost:9092 -t bier-bar -f '%p:%o -> %s' -C
> %
I am running broker 0.9.0 with consumer 0.9.0.1.
Seeing an issue wherein, after a consumer rebalance (when I add a new
consumer), my old consumer still keeps committing offsets to partitions it no
longer holds after the rebalance.
The rebalance seems to work OK: when I run ConsumerGroupCommand I see that
Hi,
thank you for your reply.
I have different devices in different regions.
When a device is started it opens a websocket connection, where the region and
details about the device are delivered once the connection is opened. After
this, all messages from this device will be sent via this
Hi,
you can have a single topic with multiple partitions. It looks like you have
messages with key=region and value=sensor reading, and you want to run some
aggregate, windowing operations by time. Kafka Streams is a perfect fit for
this use case.
On Wed, Mar 23, 2016 at 2:30 PM, Maria Musterfrau
Hi,
I asked this question a while back but haven't got an answer, so I'm
repeating it now in the hopes that someone can help.
There was a problem with JMX consumer lag in 0.8 which made lag monitoring
unreliable:
http://search-hadoop.com/m/uyzND14v72215XZpK=Re+Consumer+lag
+lies+orphaned+offsets+
Hi,
Thank you for your reply.
I would like to group the values (every message contains one value) by
their region and then add the values up in real time (for the current/last
minute).
I would like to use Kafka to feed the messages into a stream-processing
framework like Storm or
Hi,
1. Based on your design, it can be one or more topics. You can design one
topic per region or one topic for all regions' devices.
2. Yes, you need to listen to the websocket messages and write them to the
Kafka server using a Kafka producer.
In your use case, you can also send messages using Kafka
Hi,
you can use the librdkafka C library for producing data.
https://github.com/edenhill/librdkafka
Manikumar
On Wed, Mar 23, 2016 at 12:41 PM, Shashidhar Rao wrote:
> Hi,
>
> Can someone help me with reading data from sensors and storing into Kafka.
>
> At the
Hi
I am new and I have a question regarding Kafka. I have devices in different
regions. Every device opens a websocket connection when it becomes active and
sends its messages over the opened websocket connection to a server. My
question is: is every region a topic, or is every websocket
Hi,
Can someone help me with reading data from sensors and storing it into Kafka?
At the moment the sensor data are read by a C program and the captured
data is stored in Oracle.
How can data captured by the C program be stored into Kafka in real time? Is
there a sample C Kafka producer which stores
The superuser is indeed for the broker to be able to do all the things it
needs to do. For consumers and producers you can set the correct rights
with the ACL tool. http://kafka.apache.org/documentation.html#security_authz
On Tue, Mar 22, 2016 at 8:28 PM christopher palm
Hi All,
I want to publish a C++ Kafka client. I have my Git repository ready. Now I
want to add an entry for this new client on the Kafka “Clients” page
(Confluence wiki page).
I did create my login for Confluence and logged in with it, but I am not
able to edit the page. Do I need to do some other