Hi
We have a 3-broker Kafka setup on version 0.10.2.1. We have a requirement in our
company environment that we first stop our 3 Kafka brokers,
then do some operations work that takes about 1 hour, and then bring up
the Kafka brokers (version 1.1) again.
In order to achieve this, we issue:
Hi,
I am running a poll loop for a Kafka consumer and the app is deployed in
Kubernetes. I am using manual commits. I have a couple of questions on
exception handling in the poll loop:
1) Do I need to handle the consumer rebalance scenario (when any of the
consumer pods dies) by adding a listener, or will the
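With manual commits, a rebalance listener is the usual way to limit duplicate processing: offsets you have not committed when a partition is revoked will be re-delivered to the partition's new owner. A minimal sketch of consumer settings for this setup (broker address and group name are placeholders, not from the thread):

```java
import java.util.Properties;

// Sketch: consumer settings for manual commits. With enable.auto.commit=false
// you call commitSync()/commitAsync() yourself; pairing subscribe() with a
// ConsumerRebalanceListener that commits in onPartitionsRevoked() is the
// common way to reduce re-delivery after a pod dies and a rebalance fires.
public class ManualCommitConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("group.id", "my-group");                // hypothetical group id
        props.put("enable.auto.commit", "false");         // manual commits
        props.put("auto.offset.reset", "earliest");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("enable.auto.commit"));
    }
}
```

The kafka-clients wiring (the `KafkaConsumer` construction and the listener itself) is omitted here; this only shows the configuration side of the question.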
I have another very disturbing observation.
The errors go away if I start 2 kafka-producer-perf-test.sh with the same
configs on different hosts.
If I cancel one kafka-producer-perf-test.sh, then after some time the errors
below start reappearing:
org.apache.kafka.common.errors.TimeoutException:
Do you want to avoid rebalancing in such a way that if a consumer exits, its
previously owned partition is left disowned? But then who will consume from
the partition deserted by the exiting consumer? In that case you can go for
manual partition assignment. Then there is no question
of
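With manual assignment (consumer.assign() instead of subscribe()) there is no group coordination, so each instance has to know its own fixed slice of partitions. A hypothetical helper for one simple static scheme, not taken from the thread:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: instance i of n owns every partition p where
// p % n == i. If an instance dies, its slice simply goes unconsumed until
// it comes back -- exactly the trade-off described above, since no
// rebalance will hand those partitions to anyone else.
public class StaticAssignment {
    public static List<Integer> partitionsFor(int instance, int instances, int partitions) {
        List<Integer> owned = new ArrayList<>();
        for (int p = 0; p < partitions; p++) {
            if (p % instances == instance) owned.add(p);
        }
        return owned;
    }

    public static void main(String[] args) {
        // instance 0 of 2 over a 6-partition topic owns [0, 2, 4]
        System.out.println(partitionsFor(0, 2, 6));
    }
}
```

Each instance would then pass its slice (as `TopicPartition` objects) to `consumer.assign()`.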
Hi,
I am new to Kafka. I am getting low throughput and high latency when
publishing messages of size 100-200 bytes.
I have the producer configured with the following configuration. I am using
the Akka reactive-kafka library to publish messages.
Configuration:
kafka {
producer {
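The configuration snippet above is cut off. For small messages like these, a common starting point is to let the producer batch more aggressively; the values below are illustrative only, not tuned for any particular workload:

```java
import java.util.Properties;

// Sketch: producer knobs that typically raise throughput for small records.
// A larger batch plus a small linger lets the producer group many 100-200
// byte records into one request; compression shrinks the batches further.
public class ThroughputProducerConfig {
    public static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        props.put("batch.size", "65536");     // bytes per per-partition batch
        props.put("linger.ms", "10");         // wait up to 10 ms to fill a batch
        props.put("compression.type", "lz4"); // cheap CPU, good ratio
        props.put("acks", "1");               // trades durability for latency
        return props;
    }

    public static void main(String[] args) {
        System.out.println(producerProps().getProperty("linger.ms"));
    }
}
```

The same keys can be set in the reactive-kafka HOCON block; only the spelling of the config container differs.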
Hi! Are you using Kafka with Akka?
-- Original --
From: "Gnanasoundari Soundarajan";
Date: May 31, 2018 (Thu) 11:22
To: "users";
Subject: Kafka Producer Query - how to increase throughput of producer
Hi,
I am new to kafka. I am getting less
Hi Meow
I found that Apache Artemis is a non-blocking MQ and Kafka is also a messaging
service, so they seem to overlap in some aspects.
-- Original --
From: "meow licous";
Date: May 31, 2018 (Thu) 7:57
To: "users";
Subject: Artemis Source
I don't understand how log compaction works.
I have created and configured a topic and consumed from this topic:
kafka-topics --create --zookeeper localhost:2181 --replication-factor
1 --partitions 1 --topic COMPACTION10
kafka-topics --alter --zookeeper localhost:2181 --config
As a workaround, you can specify the config just as a string directly:
props.put("default.deserialization.exception.handler", ...)
-Matthias
On 5/31/18 7:48 AM, Guozhang Wang wrote:
> Hello Sumit,
>
> We are going to release 2.0 soon which should contain this fix:
>
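Spelled out, the string-based workaround might look like this; `LogAndContinueExceptionHandler` is one of the handlers shipped with Kafka Streams, shown here only as an example value (swap in your own class name as needed):

```java
import java.util.Properties;

// Sketch: set the handler by its fully qualified class name as a plain
// string, avoiding the config constant affected by the bug fixed in 2.0.
public class HandlerConfig {
    public static Properties streamsProps() {
        Properties props = new Properties();
        props.put("default.deserialization.exception.handler",
                  "org.apache.kafka.streams.errors.LogAndContinueExceptionHandler");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(streamsProps()
            .getProperty("default.deserialization.exception.handler"));
    }
}
```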
You can also pass in a custom partitioner instead of using the default
partitioner.
-Matthias
On 5/31/18 7:39 AM, Hans Jespersen wrote:
> Why don’t you just put the metadata in the header and leave the key null so it
> defaults to round robin?
>
> -hans
>
>> On May 31, 2018, at 6:54 AM, M.
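A custom partitioner's core decision can be very small. Here is a hedged sketch of the round-robin selection logic such a partitioner could use for null keys; the `org.apache.kafka.clients.producer.Partitioner` interface wiring is omitted, and only the counter logic is shown:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: round-robin partition selection. Masking with Integer.MAX_VALUE
// keeps the index non-negative after the counter wraps around.
public class RoundRobin {
    private final AtomicInteger counter = new AtomicInteger(0);

    public int nextPartition(int numPartitions) {
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin();
        // cycles 0, 1, 2, 0 over 3 partitions
        for (int i = 0; i < 4; i++) {
            System.out.print(rr.nextPartition(3) + " ");
        }
    }
}
```

The real interface would call this from `partition()` after reading the partition count from the cluster metadata.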
Hello Apache Supporters and Enthusiasts
This is a reminder that our Apache EU Roadshow in Berlin is less than
two weeks away and we need your help to spread the word. Please let your
work colleagues, friends, and anyone interested in attending know
about our Apache EU Roadshow event.
We
Hi,
As this issue is with the Confluent Schema Registry, I'm not sure if this
message is appropriate here, but I'll ask anyway :)
We're trying to query the Confluent Schema Registry, which is an externally
hosted service we're using, to find all available schemas. The Schema
Registry we are using
Why don’t you just put the metadata in the header and leave the key null so it
defaults to round robin?
-hans
> On May 31, 2018, at 6:54 AM, M. Manna wrote:
>
> Hello,
>
> I can see that this has been set as "KIP required".
>
> https://issues.apache.org/jira/browse/KAFKA-
>
> I have a
Connect is not only for sources that support reading from a specific point.
But unless the source system is tracking state for you, your connector is
likely to miss information in that source system if the connector stops.
This is probably okay in quite a few systems, so it's good if you're fine
with
Hello,
I can see that this has been set as "KIP required".
https://issues.apache.org/jira/browse/KAFKA-
I have a use case where I simply want to use the key as some metadata
information (but not really for keying messages), but would ideally like
round-robin partition assignment. All I
Hi,
I am trying to use the default production exception handler. I am managing
all my dependencies using Maven. The following are the coordinates that I am
using:
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-streams</artifactId>
  <version>1.1.0</version>
</dependency>
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>1.1.0</version>
</dependency>
My problem is that I am not able
Currently authentication logs are not available. In recent Kafka versions,
authorization failures
will be logged in logs/kafka-authorizer.log
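For reference, the stock config/log4j.properties that ships with Kafka routes authorizer events to that file with a fragment roughly like the one below; the exact levels vary by version (denied operations are generally logged at INFO, allowed ones only at DEBUG), so check the file in your own distribution:

```properties
# Dedicated authorizer log file (based on the stock config/log4j.properties)
log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
# INFO captures denied operations; DEBUG also shows allowed ones
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
```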
On Thu, May 31, 2018 at 5:34 PM, Gérald Quintana
wrote:
> Hello,
>
> I am using SASL Plaintext authentication and ACLs.
> I'd like to be able to detect
Hello,
We are trying to move from single partition to multi-partition approach for
our topics. The purpose is:
1) Each production/testbed server will have a non-Daemon thread (consumer)
running.
2) It will consume messages, commit offset (manual), and determine next
steps if commit fails, app
Hello,
I am using SASL Plaintext authentication and ACLs.
I'd like to be able to detect potential security attacks on Kafka broker
Is it possible to log, on broker side, authentication failures (wrong
password) and authorization failures (not granted)?
I read this blog post
Hi,
We are running a 4-broker cluster with Kafka 1.1 (Confluent), and we are
currently securing our cluster with SASL_SSL.
Before we introduced SASL, we had no problems taking all brokers down and
up again without any issues with producers or consumers, they
would all resume processing once the