Very nice!
On Wed, Jun 15, 2016 at 6:40 PM, John Dennison wrote:
> My team has published a post comparing python kafka clients. Might be of
> interest to python users.
>
> http://activisiongamescience.github.io/2016/06/15/Kafka-Client-Benchmarking/
My team has published a post comparing python kafka clients. Might be of
interest to python users.
http://activisiongamescience.github.io/2016/06/15/Kafka-Client-Benchmarking/
> On 15 Jun 2016, at 21:56, Subhash Agrawal wrote:
>
> [2016-06-15 13:39:39,808] DEBUG [ZkClient-EventThread-24-localhost:2181]
> [Channel manager on controller 0]: Controller 0 trying to connect to broker 0
> (kafka.controller.ControllerChannelManager)
The controller
Any luck trying to figure out this problem?
On Wed, May 18, 2016 at 10:53 AM, Samuel Chase wrote:
> Hello Ismael,
>
> On Wed, May 18, 2016 at 5:54 PM, Ismael Juma wrote:
> > Your second example should work as well. Can you please include the code
> >
Hi All,
I am embedding Kafka 0.10.0 and corresponding zookeeper in java process. In
this process, I start zookeeper first and then wait for 10 seconds and
then start kafka. These are all running in the same process. Toward the end of
kafka startup, I see following exception. It seems zookeeper
Igor,
This article talks about what to think about if putting large messages into
Kafka: http://ingest.tips/2015/01/21/handling-large-messages-kafka/
The summary is that Kafka is not optimized for handling large messages, but if
you really want to, it's possible to do it.
That website is
Prateek, hope you looked at compression?
On Thu, Jun 2, 2016 at 10:26 AM, Tom Crayford wrote:
> The article says ideal is about 10KB, which holds up well with what we've
> seen in practice as well.
>
> On Thu, Jun 2, 2016 at 6:25 PM, prateek arora
I believe this issue is similar to the one reported here:
https://issues.apache.org/jira/browse/KAFKA-3129.
--Vahid
From: Dean Arnold
To: users@kafka.apache.org
Date: 06/15/2016 11:27 AM
Subject:Re: ConsoleProducer missing messages (random behavior)
Hi Adrienne,
How do you enter the input on t1 topic? If using kafka-console-producer, you'll
need to pass in keys as well as values. Here is an example:
http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka-part-2/
btw, it appears the missing msgs are at the end of the CSV file, so maybe
the producer doesn't properly flush when it gets EOF on stdin ?
On Wed, Jun 15, 2016 at 11:21 AM, Dean Arnold wrote:
> I'm seeing similar issues with 0.9.0.1.
>
> I'm feeding CSV records (65536
I'm seeing similar issues with 0.9.0.1.
I'm feeding CSV records (65536 total, 1 record per msg) to the console
producer, which are consumed via a sink connector (using connect-standalone
and a single partition). The sink occasionally reports flushing less than
65536 msgs via the sink flush().
Hi,
I was following the Quickstart guide and I have noticed that
ConsoleProducer does not publish all messages (the number of messages
published differs from one run to another) and happens mostly on a fresh
started broker.
version: kafka_2.11-0.10.0.0
OS: Linux (Ubuntu 14.04, Centos 7.2)
JDK:
Hi community,
Probably it is very basic question, as I am new to Kafka Streams.
I am trying to initialize KTable or KStream from kafka topic. However, I
don't know how to avoid getting null keys. So,
KTable source =
builder.stream(Serdes.String(),Serdes.String(),"t1");
Apologies. Sent to a wrong mailing group.
On Wed, Jun 15, 2016 at 7:48 PM, VG wrote:
> I have a very simple driver which loads a textFile and filters a
> sub-string from each line in the textfile.
> When the collect action is executed , I am getting an exception. (The
>
I have a very simple driver which loads a textFile and filters a sub-string
from each line in the textfile.
When the collect action is executed , I am getting an exception. (The
file is only 90 MB - so I am confused what is going on..) I am running on a
local standalone cluster
16/06/15
Thanks Ewan,
The second request was made by me directly. I'm trying to add this
functionality into my .Net application. The library I'm using doesn't have an
implementation of the AvroSeriazlizer that interacts with the schema registry.
I've now added in code to make to POST to
Hi,
I've looked at this issue already at the Flink list and recommended Hironori
to post here. It seems that the consumer is not returning from the poll()
call, that's why the commitOffsets() method can not enter the synchronized
block.
The KafkaConsumer is logging the following statements:
Hello,
I am running stream processing job with Kafka and Flink.
Flink reads records from Kafka.
My software versions are:
- Kafka broker: 0.9.0.2.4 (HDP 2.4.0.0 version)
- Kafka client library: 0.9.0.1
- Flink: 1.0.3
Now I have problem that Flink job is sometimes blocked and consumer lag
is
Increasing reconnect.backoff.ms=1000 ms and BLOCK_ON_BUFFER_FULL_CONFIG to
true did not help either. The messages are simply lost.
Upset to find that there is no way to handle messages that are lost when
broker itself is not available and retries are not part of broker
connection issues.
Hello Guys.
We are going to install Apache Kafka in our local data center and different
producers which are distributed across different locations will be connected
to this server.
Our Producers will use Internet connection and also will send 10mg data
packages every 30 seconds.
I was
20 matches
Mail list logo