Wouldn't KAFKA-5494 make remote produce more reliable?
-------- Original message --------
From: Todd Palino
Date: 9/14/17 6:53 PM (GMT-08:00)
To: users@kafka.apache.org
Subject: Re: Kafka MirrorMaker - target or source datacenter deployment
Always in the target datacenter. While you can set up mirror maker for no
data loss operation, it’s still a good idea to put the connection more
likely to fail (remote) on the consumer side. Additionally, there are
significant performance problems with setting it up for remote produce as
you must r
Many of the descriptions and diagrams online describe deploying Kafka
MirrorMaker into the target data center (near the target Kafka cluster).
Since MirrorMaker is supposed to not lose messages, does it matter which
data center MirrorMaker is deployed in--source or target data center (with
any Kafk
From MirrorMaker.scala:

// Defaults to no data loss settings.
maybeSetDefaultProperty(producerProps, ProducerConfig.RETRIES_CONFIG, Int.MaxValue.toString)
maybeSetDefaultProperty(producerProps, ProducerConfig.MAX_BLOCK_MS_CONFIG, Long.MaxValue.toString)
I think the settings wo
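For reference, producer settings in the same spirit as those defaults can be written out as a properties file. This is a sketch, not MirrorMaker's exact defaults: the acks and max.in.flight values below are commonly recommended for no-data-loss mirroring but are assumptions here, so check them against your MirrorMaker version.

```properties
# Sketch of no-data-loss producer settings for mirroring.
# retries / max.block.ms mirror the Int.MaxValue / Long.MaxValue defaults above;
# acks and max.in.flight.requests.per.connection are assumed additions.
retries=2147483647
max.block.ms=9223372036854775807
acks=all
max.in.flight.requests.per.connection=1
```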
I am reading the current MirrorMaker code and am trying to understand if
MirrorMaker has any chance at losing messages. With the usage of the Max
value for ProducerConfig.MAX_BLOCK_MS_CONFIG and ProducerConfig.RETRIES_CONFIG
settings, it appears that the producer.flush() call in
maybeFlushAndComm
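Independent of the truncated details above, the flush-then-commit ordering that makes this safe can be sketched on its own. Here produce, flush, and commit are injected stand-ins for the real client calls, not Kafka APIs:

```python
def mirror_batch(records, produce, flush, commit):
    """At-least-once mirroring: commit source offsets only after every
    produced record has been flushed to the target cluster.
    produce/flush/commit are stand-ins for the real client calls."""
    for record in records:
        produce(record)
    flush()   # blocks until all in-flight sends succeed (or raises)
    commit()  # safe now: nothing committed is unflushed
```

If flush() raises, commit() never runs, so the worst case on restart is re-delivery, not loss.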
I have 2 dedicated ports on Nginx that accept Filebeat messages over SSL; Nginx
then forwards those messages to the 2 Kafka brokers in PLAINTEXT. The Nginx
server does accept traffic on other ports, but that traffic is never
forwarded to the Kafka brokers. And the 2 Kafka brokers only listen o
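The setup described here can be sketched with Nginx's stream module, terminating SSL from Filebeat and forwarding the decrypted stream to a broker in plaintext. The broker hostnames and certificate paths below are assumptions; the ports match the ones mentioned later in the thread:

```nginx
stream {
    # Dedicated SSL port per broker: Filebeat connects over SSL,
    # Nginx forwards the decrypted stream to the broker in PLAINTEXT.
    server {
        listen 9906 ssl;
        ssl_certificate     /etc/nginx/certs/server.crt;  # assumed path
        ssl_certificate_key /etc/nginx/certs/server.key;  # assumed path
        proxy_pass kafka-broker-1:9092;                   # assumed hostname
    }
    server {
        listen 9907 ssl;
        ssl_certificate     /etc/nginx/certs/server.crt;
        ssl_certificate_key /etc/nginx/certs/server.key;
        proxy_pass kafka-broker-2:9092;
    }
}
```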
parties = ports *
On Thu, Sep 14, 2017 at 8:04 PM, Ali Akhtar wrote:
> I would try to put the SSL on different ports than what you're sending
> kafka to. Make sure the kafka ports don't do anything except communicate in
> plaintext, put all 3rd parties on different parties.
>
>
> On Thu, Sep 14,
I would try to put the SSL on different ports than what you're sending
kafka to. Make sure the kafka ports don't do anything except communicate in
plaintext, put all 3rd parties on different parties.
On Thu, Sep 14, 2017 at 7:23 PM, Yongtao You wrote:
> Does the following message mean broker 6
Does the following message mean broker 6 is having trouble talking to broker
7? Broker 6's advertised listener is "PLAINTEXT://nginx:9906" and Broker 7's
advertised listener is "PLAINTEXT://nginx:9907". However, on nginx server, port
9906 and 9907 are both SSL ports because that's what producer
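That combination explains the symptom: inter-broker requests go to the other broker's advertised listener, and a PLAINTEXT connection arriving at an SSL port fails the handshake. A sketch of the relevant broker-6 settings (the local listener address is an assumption):

```properties
# Broker 6 (sketch). It speaks PLAINTEXT, but advertises the Nginx
# front end -- so its connection to broker 7 goes, in plaintext,
# to nginx:9907, which is an SSL port, and the SSL handshake fails.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://nginx:9906
```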
You are correct, that error message was a result of my misconfiguration. I've
corrected that. However, Filebeat still can't send messages to Kafka. In the
Nginx log, I see the following:
2017/09/14 21:35:09 [info] 4030#4030: *60056 SSL_do_handshake() failed (SSL:
error:140760FC:SSL routines:SS
If you ssh to the server where you got this error, are you able to ping the
IP of node 7 on the port it's trying to reach?
On Thu, Sep 14, 2017 at 5:20 PM, Yongtao You wrote:
> I'm getting a lot of these in the server.log:
>
>
> [2017-09-14 20:18:32,753] WARN Connection to node 7 could not be
> e
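Note that ping only exercises ICMP, not a TCP port. A small generic check for the port itself (host and port here are whatever the WARN message names, e.g. broker 7's advertised address):

```python
import socket

def can_connect(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False
```

For example, `can_connect("nginx", 9907)` tells you whether the advertised port is reachable at all, though not whether the protocol on it matches.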
I'm getting a lot of these in the server.log:
[2017-09-14 20:18:32,753] WARN Connection to node 7 could not be established.
Broker may not be available. (org.apache.kafka.clients.NetworkClient)
where node 7 is another broker in the cluster.
Thanks.
-Yongtao
On Thursday, September 14, 201
Ok. I will inspect this further and keep everyone posted on this.
-Sameer.
On Thu, Sep 14, 2017 at 1:46 AM, Guozhang Wang wrote:
> When exactly_once is turned on the transactional id would be set
> automatically by the Streams client.
>
> What I'd inspect is the healthiness of the brokers sinc
I got errors saying the other brokers are not reachable, or something like
that. Let me dig up the exact error messages. I am guessing the problem was
that the advertised listeners use the PLAINTEXT protocol, but Nginx requires
SSL. But I could be wrong.
Thanks!
-Yongtao
On Thursday, Se
That sounds like a viable option.
Thanks!
-Yongtao
On Thursday, September 14, 2017, 7:47:14 PM GMT+8, Jorge Pérez
wrote:
Hi!
I ask: Wouldn't it be more advisable to send metrics through Logstash
directly to the Kafka brokers, without going through Nginx, and mount
a virtual IP (corosync/pacemaker) on the Kafka cluster?
How do you know that the brokers don't talk to each other?
On Thu, Sep 14, 2017 at 4:32 PM, Yongtao You
wrote:
> Hi,
> I would like to know the right way to setup a Kafka cluster with Nginx in
> front of it as a reverse proxy. Let's say I have 2 Kafka brokers running on
> 2 different hosts; and
Hi!
I ask: Wouldn't it be more advisable to send metrics through Logstash
directly to the Kafka brokers, without going through Nginx, and mount
a virtual IP (corosync/pacemaker) on the Kafka cluster?
Regards!
2017-09-14 13:32 GMT+02:00 Yongtao You :
> Hi,
> I would like to know the ri
Hi,
I would like to know the right way to set up a Kafka cluster with Nginx in front
of it as a reverse proxy. Let's say I have 2 Kafka brokers running on 2
different hosts; and an Nginx server running on another host. Nginx will listen
on 2 different ports, and each will forward to one Kafka bro
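One way to wire this up is Nginx's stream module as a plain TCP passthrough, one front-end port per broker, so the Kafka protocol flows end to end unmodified. A minimal sketch; the listen ports and broker addresses are placeholders:

```nginx
stream {
    # Plain TCP passthrough: no SSL termination, so clients and brokers
    # keep speaking whatever protocol the listener advertises.
    server {
        listen 19092;                      # placeholder front-end port
        proxy_pass broker1.internal:9092;  # placeholder broker address
    }
    server {
        listen 19093;
        proxy_pass broker2.internal:9092;
    }
}
```

With this shape, each broker's advertised listener would point at its own front-end port, and the protocol of that listener must match what Nginx actually serves on that port.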
Perhaps a clarification to what Damian said:
The (HTML) table at the link you shared [1] shows what happens when you
get null values for a key.
We also have slightly better join documentation at [2], the content/text of
which we are currently migrating over to the official Apache Kafka
d