No, why would you want to store the offsets in zookeeper? One of the
improvements of the new consumer is that it no longer depends on zookeeper
for offset storage. And there is tooling to inspect the offsets (although
the consumer group must exist).
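For reference, a sketch of that tooling as it looks in the 0.9/0.10-era
scripts (the group name and broker address are placeholders):

    ./kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
        --new-consumer --describe --group my-group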
On Mon, Jun 20, 2016 at 10:57 PM Bryan Baugher wrote:
> Hi everyone,
>
> With the new Kafka consumer[1] is it possible to use zookeeper-based
> offset storage?
Hello Unmesh,
The timestamp extractor is always applied at the beginning of the topology
for each incoming record, and the extracted timestamp is carried throughout
the topology.
Could you share the stack trace of the failure along with your source code?
And what version of Kafka Streams are you using? Some
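To illustrate the first point, a minimal sketch of a custom extractor
against the 0.10.0-era API (TimestampedEvent is a hypothetical payload type
invented for this example):

    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.streams.processor.TimestampExtractor;

    public class PayloadTimestampExtractor implements TimestampExtractor {
        // Hypothetical payload interface carrying its own event time.
        public interface TimestampedEvent { long eventTimeMs(); }

        @Override
        public long extract(ConsumerRecord<Object, Object> record) {
            // Runs once per incoming record, before any processor sees
            // it; the returned value travels with the record downstream.
            if (record.value() instanceof TimestampedEvent) {
                return ((TimestampedEvent) record.value()).eventTimeMs();
            }
            // Fall back to the producer/broker timestamp (KIP-32).
            return record.timestamp();
        }
    }

It is registered through StreamsConfig, e.g.
props.put(StreamsConfig.TIMESTAMP_EXTRACTOR_CLASS_CONFIG,
PayloadTimestampExtractor.class.getName()).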
You should try opening an issue in the GitHub repo of this plugin. Also try
posting in the Logstash forum.
On Sun, Jun 19, 2016 at 20:39 Fahimeh Ashrafy
wrote:
> Hello all
>
> I use the kafka input and kafka output plugins in logstash. I am seeing
> high CPU usage; what can I do to reduce it?
> log
I started getting the following errors on all topics:
root@kafka-1-bw3ys:/opt/kafka/bin# ./kafka-console-consumer.sh --topic
systemconfig --from-beginning --zookeeper $ZOOKEEPER_URL
[2016-06-20 23:43:27,460] WARN Fetching topic metadata with correlation id
0 for topics [Set(systemconfig)] from b
Hi All,
I am seeing an error while sending messages via the test producer on the
SSL port. I am able to send messages to the non-SSL port successfully.
Here is my broker configuration.
listeners=SSL://:9093
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=C:/test.jks
ssl.keystore
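With ssl.client.auth=required, the producing client must both trust the
broker's certificate and present a keystore of its own. A minimal
client-side sketch (all paths and passwords are placeholders):

    security.protocol=SSL
    ssl.truststore.location=C:/client.truststore.jks
    ssl.truststore.password=changeit
    ssl.keystore.location=C:/client.keystore.jks
    ssl.keystore.password=changeit
    ssl.key.password=changeit

Such a file can be passed to the console producer via --producer.config.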
Hi everyone,
With the new Kafka consumer[1] is it possible to use zookeeper-based offset
storage?
Bryan
[1] -
http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
By default the flush to disk in Kafka is turned off; the idea is to rely on
the replicas for durability instead. The OS still flushes the data to disk
periodically when its buffer is full. The problem with this is that if you
force kill your Kafka process or the VM restarts, Kafka can lose whatever
was still sitting only in the OS buffer.
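For reference, the broker settings that turn forced flushing back on (the
values are illustrative, not recommendations; by default both are unset and
the OS decides when to write):

    # fsync after this many messages are appended to a log...
    log.flush.interval.messages=10000
    # ...or after this many milliseconds, whichever comes first.
    log.flush.interval.ms=1000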
Kafka periodically flushes to disk, but this can happen after your producer
gets an acknowledgement. If you have a replicated topic, then the producer
gets an acknowledgement only after the other replicas have received the
message. In this way you get safety from server crashes by replicating to
other brokers.
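A minimal sketch of the producer side of that guarantee (broker address and
topic name are placeholders; acks=all is the setting that ties the
acknowledgement to the replicas):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class AcksAllProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            // Leader acks only after the in-sync replicas have the
            // message, so an acked message survives a broker crash.
            props.put("acks", "all");
            props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            try (KafkaProducer<String, String> producer =
                     new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("events", "k", "v"));
            }
        }
    }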
Hello everyone,
Does Kafka guarantee that data flushed by the server is durably persisted
to disk? Is it possible that data that has only accumulated in the page
cache, without being synced to disk, gets lost if the server crashes?
Thanks,
Xin
Thank you, but that is not the question I asked. If a topic exists and a
consumer is listening -- and then the topic is deleted -- does the
consumer get some sort of notification that the topic is now gone, perhaps
an exception?
Chris
Hi,
I was trying to experiment with Kafka Streams, and had the following code:

    KTable<String, Integer> aggregated = locationViews
        .map((key, value) -> {
            GenericRecord parsedRecord = parse(value);
            String parsedKey = parsedRecord.get("region").toString() +
                parsedRecord.get("loca
Correction: leader was an Option, and I guess java.util.Optional cannot be
used yet because the Java language-level compatibility is too low. What a
mess, Java and Scala in the codebase, and even new code being written in
old Java.
On Mon, Jun 20, 2016 at 12:08 PM, Stevo Slavić wrote:
> Hello Apache Kafka community,
Hello Apache Kafka community,
The Scala API had a hasLeader method in the PartitionMetadata of
MetadataResponse. I see the new Java API does not have such a method, and
there's no javadoc on the class. Can somebody please explain - will the
leader reference be null or something when a partition has no leader? Why
not enc
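In the Java client the leader node is reachable through PartitionInfo, and
its leader() accessor returns null when the partition currently has no
leader, so callers check for null explicitly; a small sketch:

    import org.apache.kafka.common.Node;
    import org.apache.kafka.common.PartitionInfo;

    static boolean hasLeader(PartitionInfo partition) {
        Node leader = partition.leader();
        return leader != null; // null means no leader right now
    }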
Hello All,
I have data coming from sensors into a Kafka cluster in text format,
delimited by commas.
How do I offload this data from Kafka to Hive periodically? I guess Kafka
Connect should solve my problem, but when I checked the documentation, the
examples have only Avro-formatted data. Can you please provide an example?
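One possibility, offered as an assumption rather than something confirmed
in this thread: comma-delimited text can at least flow through Connect
unmodified if the worker uses the built-in StringConverter; whether a given
HDFS/Hive sink connector accepts plain strings is connector-specific:

    # connect worker properties (sketch)
    key.converter=org.apache.kafka.connect.storage.StringConverter
    value.converter=org.apache.kafka.connect.storage.StringConverter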
I am trying to run the kafka-console-producer.sh script on a kerberized HDP
2.4 sandbox with Kafka 0.9. I have created a topic and granted the user
permissions on the topic.
Below is the command I am using to start the producer:
./kafka-console-producer.sh --topic test-topic --broker-list
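On a kerberized 0.9 cluster the console producer also needs a client JAAS
config and a SASL protocol setting; a sketch under those assumptions (paths,
broker address, and file names are placeholders, and HDP builds may name
the protocol PLAINTEXTSASL rather than SASL_PLAINTEXT):

    export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
    ./kafka-console-producer.sh --broker-list broker1:6667 \
        --topic test-topic --producer.config client.properties

where client.properties contains at least:

    security.protocol=SASL_PLAINTEXT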
Also, in the tests the produce operation starts first, and after 400
seconds the consume requests start in parallel. The number of parallel
requests is driven by the number of partitions. From the logs, we see that
the very first consumer that gets created to serve a read request takes
around