So the Kafka performance tools seem to indicate that the problem is not in
the broker, but rather somewhere in librdkafka/OpenSSL. I'm not completely
sure I got the configs right to try and eliminate any batching
considerations in the latency calculation (it seems like encrypting /
decrypting a
Hi guys,
I’m using Kafka 0.9.0.1 and the Java client. I saw the following exceptions thrown
by my consumer:
Caused by: java.lang.IllegalStateException: Correlation id for response
(767587) does not match request (767585)
at
Becket/Jason,
So, it turns out the server where we saw the recurring FD issue was not
patched correctly, which is why we saw the deadlock again. We caught that,
and after testing over the last few days, feel pretty confident, I'd say
99% sure, that the patch in KAFKA-3994 does fix the problem for
Thank you both, Hans and Rajini.
I will try out all the methods you suggested and report back.
As an aside, my investigation into the known, slow software implementation
of the GCM class of cipher algorithms in Java 8 was a bust. I tried all of
the default cipher suites common to OpenSSL (on the
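For anyone who wants to reproduce that kind of measurement, here is a rough, self-contained micro-benchmark sketch of AES-GCM throughput using plain JCE (the class name, payload size, and round count are all illustrative, not from the original investigation):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;

public class GcmBench {
    public static void main(String[] args) throws Exception {
        // Random 128-bit AES key (illustrative; not a real workload).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[12];             // 96-bit IV, the recommended size for GCM
        new SecureRandom().nextBytes(iv);
        byte[] payload = new byte[64 * 1024]; // 64 KiB dummy record batch

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        int rounds = 200;
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            iv[0] = (byte) i; // vary the IV: JCE forbids reusing a key+IV pair for encryption
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            cipher.doFinal(payload);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        double mbPerSec = (payload.length / (1024.0 * 1024.0)) * rounds / seconds;
        System.out.printf("AES-GCM throughput: %.1f MB/s%n", mbPerSec);
    }
}
```

On a stock Java 8 JVM (no AES-NI-accelerated GCM intrinsics) the reported number should be noticeably lower than the same loop run on Java 9+.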
For future reference, new releases typically have an "upgrade" section that
talks about compatibility.
e.g. http://kafka.apache.org/0101/documentation.html#upgrade
On Fri, Nov 18, 2016 at 2:49 PM, Zakee wrote:
> No. A newer client API won’t work with an older broker version.
No. A newer client API won’t work with an older broker version. Generally, an
older client should be able to work with a newer broker version.
-Zakee
> On Nov 18, 2016, at 11:58 AM, Weian Deng wrote:
>
> More specifically, Is Kafka Java client 0.9.0.1 compatible with Kafka
Mark,
Thanks for reporting this. First, a clarification. The HW (high watermark) is
actually never advanced until all in-sync followers have fetched the corresponding
message. For example, in step 2, if all follower replicas issue a fetch
request at offset 10, it serves as an indication that all replicas have
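The rule described above can be sketched in a few lines: the HW can only advance to the minimum fetch offset reported across the in-sync replicas (a toy model for illustration, not broker code):

```java
import java.util.Arrays;

public class HighWatermarkSketch {
    // The HW advances only to the minimum fetch offset across in-sync replicas.
    public static long highWatermark(long[] fetchOffsets) {
        return Arrays.stream(fetchOffsets).min().orElse(0L);
    }

    public static void main(String[] args) {
        // All three in-sync followers have fetched up to offset 10, so HW advances to 10.
        System.out.println(highWatermark(new long[]{10, 10, 10}));
        // One lagging follower at offset 7 holds the HW back.
        System.out.println(highWatermark(new long[]{10, 7, 10}));
    }
}
```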
Thanks Matthias/Michael/Guozhang!
Using app id may help to some extent. Will have to think & test this
through.
Good to know there will be more direct support for this in the future. Maybe
it will play well with KIP-37.
Srikanth
On Fri, Nov 18, 2016 at 1:12 PM, Guozhang Wang
Thanks again Rajini!
One last followup question, if you don't mind. You said that my
server.properties file should look something like this:
listeners=SSL://:9093
advertised.listeners=SSL://mybalancer01.example.com:9093
security.inter.broker.protocol=SSL
However, please remember that I'm
This is because the producer relies on its metadata refresh to find the new
leader of the partitions when the old leader fails. So if you are lucky and
the next refresh happens immediately after the old leader is bounced, the
producer will send immediately, and if you are unlucky and the
previous
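A hedged sketch of the producer settings that influence how quickly it recovers after a leader bounce (the values are illustrative, not recommendations):

```properties
# Retry sends that fail while the leader is moving
retries=5
retry.backoff.ms=100
# Upper bound on how stale cached metadata may get before a forced refresh
metadata.max.age.ms=30000
# How long a request may wait for a response before being failed/retried
request.timeout.ms=30000
```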
Srikanth,
Are you checking to see if you can manually set the internal topic names to
follow your own naming convention in your shared cluster? For that the
current answer is no, as Streams tries to abstract these away from users,
since they are treated as "internals" anyway. But I
Hello Sachin,
Which version of Kafka are you using for this application?
Guozhang
On Tue, Nov 15, 2016 at 9:52 AM, Sachin Mittal wrote:
> Hi,
> I have a simple pipeline
> stream.aggregateByKey(new Initializer() {
>     public List apply() {
>         return new ArrayList();
>     }
Hello Ara,
I would love to learn if your rebalance issue mentioned in KAFKA-4392 still
exists, since I think what you observed may be a combination of various
issues as we discussed offline, which could be related to KIP-62 as well.
Let me know and I'd love to help investigate further.
Guozhang
Hi Ryan,
Perhaps you could share some of your code so we can have a look? One thing I'd
check is if you are using compacted Kafka topics. If so, and if you have
non-unique keys, compaction happens automatically and you might only see the
latest value for a key.
Thanks
Eno
> On 18 Nov 2016, at
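As a toy illustration of the compaction behaviour Eno describes (a model of the retention semantics only, not the broker's implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CompactionSketch {
    // Toy model of log compaction: the retained log keeps only the latest value per key.
    public static Map<String, String> compact(String[][] records) {
        Map<String, String> latest = new LinkedHashMap<>();
        for (String[] r : records) {
            latest.put(r[0], r[1]); // a later record overwrites an earlier one with the same key
        }
        return latest;
    }

    public static void main(String[] args) {
        String[][] log = {{"k1", "v1"}, {"k2", "v2"}, {"k1", "v3"}};
        // A consumer reading after compaction only sees the latest value for k1.
        System.out.println(compact(log));
    }
}
```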
Hi John,
Perhaps you can tell us a bit more on what kind of fault tolerance you are
looking for. I ask because Kafka Streams is fault-tolerant and highly available
by default, and perhaps you don't need to do anything extra for your application.
Thanks
Eno
> On 18 Nov 2016, at 15:21, John
Good day.
My name is Valeriy, and I have a problem with my Kafka consumer.
My problem is described in detail here:
http://stackoverflow.com/questions/40651260/apache-kafka-consumer-stop-consuming-messages
Briefly, I am sure that messages continue to arrive in the topic at a rate
of 100 per
Hi!
I'm wondering if any Kafka users have experience collecting VMware vCenter
metrics with it. If so, is there an example described on the
web?
Cheers,
Krystian
Eno, thanks!
In my case I think I need more functionality for fault tolerance, so will look
at KafkaConsumer.
Thanks,
John
-----Original Message-----
From: Eno Thereska [mailto:eno.there...@gmail.com]
Sent: Friday, November 18, 2016 9:02 AM
To: users@kafka.apache.org
Subject: Re: Change
You should set advertised.listeners rather than the older
advertised.host.name property in server.properties:
- listeners=SSL://:9093
- advertised.listeners=SSL://mybalancer01.example.com:9093
- security.inter.broker.protocol=SSL
If your listeners are on particular interfaces, you can
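For completeness, a hedged sketch of binding the SSL listener to one specific interface (the address 10.0.0.5 is illustrative):

```properties
# Bind only to the given interface instead of all interfaces
listeners=SSL://10.0.0.5:9093
# What clients (via the load balancer) should connect to
advertised.listeners=SSL://mybalancer01.example.com:9093
security.inter.broker.protocol=SSL
```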
Hi All,
How do I recover the broker from the following errors?
ERROR [ReplicaFetcherThread-0-28], Error for partition
[_confluent-controlcenter-0-MonitoringStream-ONE_HOUR-changelog,4] to
broker 28:org.apache.kafka.common.errors.UnknownTopicOrPartitionException:
This server does not host this
Hi
I'm trialling Kafka Streams for a large stream-processing job; however,
I'm seeing message loss even in the simplest scenarios.
I've tried to boil it down to the simplest scenario where I see loss which
is the following:
1. Ingest messages from an input stream (String, String)
2. Decode
Hi All,
Is it possible to connect producers/consumers over the plaintext protocol
with the following broker configuration?
broker config:
listeners=PLAINTEXT://host-name:9091,SASL_PLAINTEXT://host-name:9092
allow.everyone.if.no.acl.found=true
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
Thanks Rajini,
So currently one of our Kafka nodes is 'mykafka01.example.com', and in its
server.properties file, I have advertised.host.name=mykafka01.example.com. Our
load balancer lives at mybalancer01.example.com, and this is what producers
will connect to (over SSL) to send messages to
You can use the tools shipped with Kafka to measure latency.
For latency at low load, run:
- bin/kafka-run-class.sh kafka.tools.EndToEndLatency
You may also find it useful to run the producer performance test at different
throughputs. The tool prints out latency as well:
-
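For reference, the invocations look roughly like this (argument values are illustrative; check the usage output of the tools shipped with your Kafka version):

```shell
# End-to-end latency at low load:
bin/kafka-run-class.sh kafka.tools.EndToEndLatency \
  broker_list topic num_messages producer_acks message_size_bytes

# Producer latency at a fixed throughput (records/sec):
bin/kafka-producer-perf-test.sh --topic test --num-records 100000 \
  --record-size 100 --throughput 1000 \
  --producer-props bootstrap.servers=localhost:9092
```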
Zac,
Kafka has its own built-in load-balancing mechanism based on partition
assignment. Requests are processed by partition leaders, distributing load
across the brokers in the cluster. If you want to put a proxy like HAProxy
with SSL termination in front of your brokers for added security, you
Srikanth,
as Matthias said, you can achieve some namespacing effects through the use
of (your own in-house) conventions of defining `application.id` across
teams. The id is used as the prefix for topics, see
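To make the prefixing concrete, here is a tiny sketch of how an internal changelog topic name is derived; the application id and store name below are made up, and the `<application.id>-<storeName>-changelog` pattern is the Streams convention:

```java
public class InternalTopicNames {
    // Kafka Streams prefixes internal topics with the application.id,
    // e.g. "<application.id>-<storeName>-changelog" for state-store changelogs.
    public static String changelogTopic(String applicationId, String storeName) {
        return applicationId + "-" + storeName + "-changelog";
    }

    public static void main(String[] args) {
        // A per-team naming convention embedded in application.id namespaces the topics.
        System.out.println(changelogTopic("team-a.orders-app", "counts-store"));
    }
}
```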
Correct, we've disabled unclean leader election. There were also no log
messages from an unclean election. I believe that Kafka thinks it
performed a clean election and still lost data.
--
Mark Smith
m...@qq.is
On Thu, Nov 17, 2016, at 06:23 PM, Tauzell, Dave wrote:
> Do you have:
>
>