Flume with a netcat source and a Kafka channel or Kafka sink will do that.
A bit more complex than a kafkacat equivalent, but it will get the job done.
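The Flume approach mentioned above might be sketched with an agent configuration along these lines (agent, port, topic, and broker names are illustrative assumptions, not taken from the thread):

```properties
# Sketch of a Flume agent: netcat source -> memory channel -> Kafka sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Netcat source: listens on a TCP port, one event per line
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

a1.channels.c1.type = memory

# Kafka sink: produces each event to the given topic
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = mytopic
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.channel = c1
```

A Kafka channel could replace the memory channel if durability between source and sink matters.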
On Tue, May 19, 2015 at 3:02 AM, clay teahouse clayteaho...@gmail.com wrote:
Hi All,
Does anyone know of an implementation of kafkacat that reads
Hi Clay,
Not really sure what you mean by socket, but if you want something that
listens on a network port and forwards/produces all data to Kafka, then
you might want to look at n2kafka: https://github.com/redBorder/n2kafka
Another alternative would be to use inetd, socat, or similar to pipe a
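The socat-to-kafkacat idea can be sketched as a one-liner (port, broker, and topic are illustrative; `-u` makes socat unidirectional, `-P` puts kafkacat in producer mode):

```shell
# Accept TCP connections on port 9999 and pipe all received data
# into kafkacat, which produces it to the assumed topic "mytopic"
socat -u TCP-LISTEN:9999,fork,reuseaddr - | kafkacat -b localhost:9092 -t mytopic -P
```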
Thanks Magnus. I'll take a look at n2kafka. I have many data sources
sending data to kafka and I don't want to spawn lots of kafkacat processes.
On Tue, May 19, 2015 at 2:40 AM, Magnus Edenhill mag...@edenhill.se wrote:
Thanks for the input. I've tried Flume, but the performance is not nearly as
good as kafkacat's.
On Tue, May 19, 2015 at 2:40 AM, Magnus Edenhill mag...@edenhill.se wrote:
Hi,
I'm trying to use the low-level Consumer Java API to manage offsets manually,
with the latest kafka_2.10-0.8.2.1. To verify that the offsets I commit/read
from Kafka are correct, I use the kafka.tools.ConsumerOffsetChecker tool. Here
is an example of the output for a topic/consumer group
Sorry, the formatting seems to be all screwed up... I'll try to make it all
plain text:
Hi,
I'm trying to use the low-level Consumer Java API to manage offsets manually, with
the latest kafka_2.10-0.8.2.1.
To verify that the offsets I commit/read from Kafka are correct, I use the
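The offset-checking tool named earlier in the thread is typically invoked along these lines (group, topic, and ZooKeeper address are illustrative; flag names are as I recall them for 0.8.x):

```shell
# Print committed offsets, log-end offsets, and lag per partition
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --zkconnect localhost:2181 --group my-group --topic my-topic
```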
Hi,
I am testing the Kafka 0.8.2.1 new producer API. For synchronous sending, I am
calling future.get() just after producer.send().
I killed my broker and started the producer, and noticed that it throws an
ExecutionException, but after that it still keeps trying to re-connect to the
broker and this keeps on
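The synchronous-send pattern described above might look like the following sketch against the 0.8.2 new producer API (broker address, topic, and record contents are assumptions):

```java
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class SyncSendSketch {
    public static void main(String[] args) throws InterruptedException {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        try {
            // Blocking on the returned Future makes the send synchronous
            Future<RecordMetadata> future =
                    producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            RecordMetadata md = future.get();
            System.out.println("written at offset " + md.offset());
        } catch (ExecutionException e) {
            // The send itself failed, e.g. because the broker is unreachable
            System.err.println("send failed: " + e.getCause());
        } finally {
            producer.close();
        }
    }
}
```

Note that an ExecutionException here reports failure of one send; the producer object stays alive and will keep retrying connections in the background, which matches the behaviour described in the message.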
Try out Bruce (https://github.com/ifwe/bruce). It's a daemon producer that
listens on a socket; it does exactly what you are looking for, I think.
~ Joestein
On May 19, 2015 7:05 AM, clay teahouse clayteaho...@gmail.com wrote:
Hello there,
We ran into a situation on our dev Kafka cluster (3 nodes, v0.8.2) where we ran
out of disk space on one of the nodes. To free up disk space, we reduced
log.retention.hours to something more manageable (from 72hrs to 52hrs), and we
also moved the log directory to a 200GB disk.
Hi All
Has anyone tried this? We have two data centers, A and B. We would like data
replicated between A and B, so I would like to have a Kafka cluster set up
in both A and B. When we need to replicate from A to B, I would like the app in A
to publish a topic to the Kafka cluster in data center A. The
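Cross-datacenter replication of this kind is usually done with the MirrorMaker tool shipped with Kafka 0.8.x, which consumes from cluster A and produces to cluster B. A sketch of an invocation (config file names and topic pattern are illustrative):

```shell
# Mirror matching topics from DC A (consumer side) to DC B (producer side)
bin/kafka-run-class.sh kafka.tools.MirrorMaker \
  --consumer.config dc-a-consumer.properties \
  --producer.config dc-b-producer.properties \
  --whitelist 'my-topic.*'
```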
Good day Kafka-users.
To support our transition to Kafka as the central hub for data in our Big Data
Platform, we created a new producer named Klogger
(https://github.com/blackberry/klogger). It's a stripped down, high
performance producer that can take a TCP port or file as an input, and
I came across this google group conversation that suggests KafkaConsumer will
not be complete until the next release.
(https://groups.google.com/forum/#!msg/kafka-clients/4VLb-_wI22c/imYRlxogo-kJ)
```
org.apache.kafka.clients.consumer.KafkaConsumer<String, String> consumer = new
```
The new consumer in trunk is functional when used similarly to the old
SimpleConsumer, but none of the functionality corresponding to the high
level consumer is there yet (broker-based coordination for consumer
groups). There's not a specific timeline for the next release (i.e. when
it's ready).
The link below shows the code is definitely in trunk.
Does anyone know when the source in trunk might be released?
Thanks!
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L634
Thanks!
On 5/19/15, 3:12 PM, Ewen Cheslack-Postava e...@confluent.io wrote:
Hi Mayank,
The client should expose a configuration property to enable TCP keepalives
(SO_KEEPALIVE) on its broker sockets.
SO_KEEPALIVE provides speedier detection of connection loss on idle
connections.
(As a positive side effect, it also helps keep connections alive through
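SO_KEEPALIVE itself is a standard socket option, independent of any Kafka client. At the Java socket level, enabling it looks like this:

```java
import java.net.Socket;
import java.net.SocketException;

public class KeepAliveSketch {
    public static void main(String[] args) throws SocketException {
        Socket socket = new Socket(); // not yet connected; options can be set first
        socket.setKeepAlive(true);    // enable SO_KEEPALIVE probes on idle connections
        System.out.println("keepalive=" + socket.getKeepAlive());
    }
}
```

Whether a given Kafka client exposes this as a configuration property depends on the client; the point in the thread is that it should.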
Thanks Magnus.
In this case the connections are not idle; there is active traffic between
the producer/client and the Kafka node when the node goes down.
There are socket timeout arguments for SimpleConsumer, but there are none
when creating the producer. Is there a configuration/property item
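For comparison, the socket timeout the message refers to is an explicit constructor argument on the old SimpleConsumer (all values below are illustrative):

```java
import kafka.javaapi.consumer.SimpleConsumer;

public class SoTimeoutSketch {
    public static void main(String[] args) {
        // host, port, soTimeout (ms), bufferSize (bytes), clientId
        SimpleConsumer consumer =
                new SimpleConsumer("broker1", 9092, 100000, 64 * 1024, "checker");
        consumer.close();
    }
}
```

The 0.8.2 producers offer no direct equivalent of that soTimeout argument, which is what the question is getting at.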
Hi Bill,
I don't know if this is exactly the same case (the last part, where they apply
the topic locally once they get it, is a bit unclear), but we have a setup with
Kafka in DC A and consumers in both DC A and DC B. Actually, we also have
producers in A and B writing to Kafka in A, but we are trying to change
I am using the Kafka 0.8.2.1 old producer. When one of the Kafka nodes in the
remote cluster is down, the producer waits about 15 minutes before it
disconnects and tries to connect to another node. (Kafka takes 1 min to
change leaders.)
Producer config used:
request.required.acks=1
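For context, a fuller old-producer (0.8.x Scala producer) configuration for tuning retry and timeout behaviour might look like this sketch (broker list and values are illustrative):

```properties
# 0.8.x "old" producer settings (illustrative values)
metadata.broker.list=broker1:9092,broker2:9092
request.required.acks=1
# How many times to retry a failed send before giving up
message.send.max.retries=3
# Back-off between retries (ms)
retry.backoff.ms=100
# How long to wait for a response to a produce request (ms)
request.timeout.ms=10000
```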