Hi Harsha,
Thank you very much for your response. The bash script you provided to
generate the keystores works for me and solves the problem. I was wondering
whether it was caused by cipher suite differences between OpenJDK and
Oracle JDK, but anyway that is not the case. Finally I got it working.
Flume could be an option with an Interceptor, although the throughput could
be lower compared to MirrorMaker with compression and the shallow iterator
enabled.
On Tue, Aug 25, 2015 at 10:28 PM tao xiao xiaotao...@gmail.com wrote:
In the trunk code mirror maker provides the ability to filter out
Hi,
I have configured 3 brokers and 3 ZooKeeper nodes on different Unix boxes. All
are receiving messages successfully.
1. Now the leader broker has crashed, and when I try to publish 10 messages,
some of them fail with
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata
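One common mitigation while a new leader is being elected is to let the producer retry and wait longer for metadata. A minimal config sketch, assuming the 0.8.x new Java producer; the values below are illustrative, not recommendations:

```properties
# Retry sends that fail during leader failover instead of failing fast
retries=5
retry.backoff.ms=500
# Allow more time for the metadata refresh to find the new leader
metadata.fetch.timeout.ms=60000
# Require acknowledgement from all in-sync replicas for durability
acks=all
```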
Hi,
Our application receives events through an HAProxy server over HTTPS, which
should be forwarded to and stored in a Kafka cluster.
What would be the best option for this?
This layer should receive events from HAProxy and produce them to the Kafka
cluster in a reliable and efficient way (and should
Hi Damian
Just clarifying - you’re saying you currently have Kafka 0.7.x running with
dedicated broker addresses (bypassing ZK) and hitting a VIP which you use for
load balancing writes. Is that correct?
Are you worried about something specific in the 0.8.x way of doing things (ZK
under
Hi Ben,
Yes, we have a VIP fronting our Kafka 0.7 clusters. The producers connect
with a broker list that just points to the VIP.
In 0.8, with the new producer at least, the producer doesn't go through
Zookeeper - which is good. However, all producers will connect to all
brokers to send messages.
Just a little feedback on our issue(s) as FYI to whoever is interested.
It basically all boiled down to the configuration of topics. We noticed
while performance testing (or trying to ;) ) that the partitioning was
most critical to us.
We originally followed the LinkedIn recommendation and
That’s a fair few connections indeed!
You may be able to take the route of pinning producers to specific partitions.
KIP-22
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-22+-+Expose+a+Partitioner+interface+in+the+new+producer)
made this easier by exposing a partitioner interface to
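As a sketch of the pinning idea in Python (the actual KIP-22 interface is `org.apache.kafka.clients.producer.Partitioner` in the Java client; the helper name here is hypothetical):

```python
import hashlib

def pin_partition(producer_id: str, num_partitions: int) -> int:
    """Deterministically pin a producer to one partition via a stable hash.

    Hypothetical helper illustrating the KIP-22 idea: a custom partitioner
    that always returns the same partition for a given producer, so that
    producer only needs a connection to that partition's leader broker
    rather than to every broker in the cluster.
    """
    digest = hashlib.md5(producer_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```

Since the mapping is deterministic, restarting a producer does not change which partition (and hence which broker) it talks to.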
Thanks for the updates, Jörg. They're very useful.
Thanks,
Raja.
On Wed, Aug 26, 2015 at 8:58 AM, Jörg Wagner joerg.wagn...@1und1.de wrote:
Just a little feedback on our issue(s) as FYI to whoever is interested.
It basically all boiled down to the configuration of topics. We noticed
while
I'm actually also really interested in this... I had a chat about this on
the Kafka channel of the distributed systems Slack (http://dist-sys.slack.com) a
few days ago, but we're not much further than griping about the problem.
We're basically migrating an existing event system, one which packed
messages
Marc,
Thanks for your response. Let's go into more detail on the problem.
As I already mentioned in the previous post, here is our expected data flow:
logs -> HAProxy -> {new layer} -> Kafka cluster
The 'new layer' should receive logs as HTTP requests from HAProxy and produce
the same logs to
Apologies if this is somewhat redundant; I'm quite new to both Kafka and the
Confluent Platform. Ewen, when you say "Under the hood, the new producer will
automatically batch requests",
do you mean that this is current or planned behavior of the REST proxy? Are
there any durability
Ewen,
Thanks for the valuable information.
I will surely try this and come up with my comments.
Thanks again
Hemanth
-----Original Message-----
From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
Sent: Thursday, August 27, 2015 10:21 AM
To: users@kafka.apache.org
Subject: Re: Http Kafka
Hemanth,
Can you be a bit more specific about your setup? Do you have control over
the format of the request bodies that reach HAProxy or not? If you do,
Confluent's REST proxy should work fine and does not require the Schema
Registry. It supports both binary (encoded as base64 so it can be
Ewen,
Thanks for the explanation.
We have control over the format of the logs coming to HAProxy. Right now,
these are plain JSON logs (just like syslog messages, with a few additional
metadata fields) sent to HAProxy from remote clients over HTTPS. No
serialization is used.
Currently, we have one
Hemanth,
The Confluent Platform 1.0 version doesn't have JSON embedded format support
(i.e. direct embedding of JSON messages), but you can serialize, base64
encode, and use the binary mode, paying a bit of overhead. However, since
then we merged a patch to add JSON support:
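A minimal sketch of the workaround described above: wrapping a JSON event as a base64-encoded record for the REST proxy's binary embedded format. The record shape (`{"records": [{"value": ...}]}`) follows the REST proxy's produce request format; the helper name is an assumption for illustration:

```python
import base64
import json

def to_binary_record(event: dict) -> dict:
    """Serialize a JSON event and base64-encode it for the REST proxy's
    binary embedded format (the pre-JSON-support workaround)."""
    payload = json.dumps(event).encode("utf-8")
    return {"value": base64.b64encode(payload).decode("ascii")}

# A produce request body for one event would then look like:
body = {"records": [to_binary_record({"msg": "hi"})]}
```

The consumer side reverses the process: base64-decode the value, then parse the JSON.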