[ https://issues.apache.org/jira/browse/KAFKA-6985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17000779#comment-17000779 ]

Radoslaw Gasiorek edited comment on KAFKA-6985 at 12/20/19 9:56 AM:
--------------------------------------------------------------------

up, we had a MIM because of this (on Kafka 2.2).

Likely root cause: a temporary latency/network issue caused some broker nodes to 
disconnect from the cluster and never fully rejoin, while at the same time 
remaining leaders for some partitions. The initial problems showed 
similarities to these:
 https://issues.apache.org/jira/browse/KAFKA-7165 and 
https://issues.apache.org/jira/browse/KAFKA-6584
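A quick way to confirm whether brokers were still listed as leaders while out of sync is to scan `kafka-topics.sh --describe` output for partitions whose ISR has shrunk below the assigned replica set. A minimal sketch; the tab-separated `Topic:/Partition:/Replicas:/Isr:` line format is an assumption and varies slightly across Kafka versions:

```python
# Sketch: flag under-replicated partitions from `kafka-topics.sh --describe`
# output. Assumes the usual tab-separated format, e.g.
#   Topic: t1	Partition: 0	Leader: 1	Replicas: 1,2,3	Isr: 1
def under_replicated(describe_output):
    """Yield (topic, partition, missing_replicas) for partitions whose ISR
    is smaller than the assigned replica set."""
    for line in describe_output.splitlines():
        # Split each "Key: value" field on tabs into a dict.
        fields = dict(
            part.split(": ", 1)
            for part in line.strip().split("\t")
            if ": " in part
        )
        if "Partition" not in fields or "Isr" not in fields:
            continue  # skip topic summary lines and blanks
        replicas = set(fields["Replicas"].split(","))
        isr = set(fields["Isr"].split(","))
        missing = replicas - isr
        if missing:
            yield fields["Topic"], int(fields["Partition"]), sorted(missing)
```

Any partition reported here while its leader is one of the "half-disconnected" brokers would match the failure mode described above.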


was (Author: rgasiorek):
up, we had a MIM because of this (on Kafka 2.2)

Likely root cause: a temporary latency/network issue caused some broker nodes to 
disconnect from the cluster and never fully rejoin, while at the same time 
remaining leaders for some partitions. Possibly related to 
 https://issues.apache.org/jira/browse/KAFKA-7165 and 
https://issues.apache.org/jira/browse/KAFKA-6584

> Error connection between cluster node
> -------------------------------------
>
>                 Key: KAFKA-6985
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6985
>             Project: Kafka
>          Issue Type: Bug
>          Components: KafkaConnect
>         Environment: Centos-7
>            Reporter: Ranjeet Ranjan
>            Priority: Major
>
> Hi, I have set up a multi-node Kafka cluster but I am getting an error while 
> connecting one node to another, although there is no issue with the firewall 
> or ports; I am able to telnet between the nodes.
> {code:java}
> WARN [ReplicaFetcherThread-0-1], Error in fetch kafka.server.ReplicaFetcherThread$FetchRequest@8395951 (kafka.server.ReplicaFetcherThread)
> java.io.IOException: Connection to Kafka-1:9092 (id: 1 rack: null) failed
> at kafka.utils.NetworkClientBlockingOps$.awaitReady$1(NetworkClientBlockingOps.scala:84)
> at kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:94)
> at kafka.server.ReplicaFetcherThread.sendRequest(ReplicaFetcherThread.scala:244)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:234)
> at kafka.server.ReplicaFetcherThread.fetch(ReplicaFetcherThread.scala:42)
> at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:118)
> at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:103)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> {code}
> Here are the server.properties files.
> Node-1
>  
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=1
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-1:9092
> advertised.listeners=PLAINTEXT://kafka-1:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
>  
>  
> Node-2
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=2
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-2:9092
> advertised.listeners=PLAINTEXT://kafka-2:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
>  
> Node-3
>  
> {code:java}
> ############################# Server Basics #############################
> # The id of the broker. This must be set to a unique integer for each broker.
> broker.id=3
> # Switch to enable topic deletion or not, default value is false
> delete.topic.enable=true
> ############################# Socket Server Settings #############################
> listeners=PLAINTEXT://kafka-3:9092
> advertised.listeners=PLAINTEXT://kafka-3:9092
> #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
> # The number of threads handling network requests
> num.network.threads=3
> # The number of threads doing disk I/O
> num.io.threads=8
> # The send buffer (SO_SNDBUF) used by the socket server
> socket.send.buffer.bytes=102400
> # The receive buffer (SO_RCVBUF) used by the socket server
> socket.receive.buffer.bytes=102400
> # The maximum size of a request that the socket server will accept (protection against OOM)
> socket.request.max.bytes=104857600
> ############################# Log Basics #############################
> # A comma separated list of directories under which to store log files
> log.dirs=/var/log/kafka
> # The default number of log partitions per topic. More partitions allow greater
> # parallelism for consumption, but this will also result in more files across
> # the brokers.
> num.partitions=1
> # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
> # This value is recommended to be increased for installations with data dirs located in RAID array.
> num.recovery.threads.per.data.dir=1
> ############################# Log Retention Policy #############################
> # The minimum age of a log file to be eligible for deletion due to age
> log.retention.hours=48
> # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
> # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
> log.retention.bytes=1073741824
> # The maximum size of a log segment file. When this size is reached a new log segment will be created.
> log.segment.bytes=1073741824
> # The interval at which log segments are checked to see if they can be deleted according
> # to the retention policies
> log.retention.check.interval.ms=300000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> ############################# Zookeeper #############################
> # root directory for all kafka znodes.
> zookeeper.connect=10.130.82.28:2181
> # Timeout in ms for connecting to zookeeper
> zookeeper.connection.timeout.ms=6000
> {code}
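The reporter's telnet check can be scripted: a small TCP probe against each advertised listener quickly shows whether basic reachability is the problem before digging into broker logs. A minimal sketch; the broker host/port list below is illustrative, not taken from a real cluster:

```python
import socket

def reachable(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, DNS failure, or timeout
        return False

# Illustrative broker list matching the configs above; adjust to your cluster.
brokers = [("kafka-1", 9092), ("kafka-2", 9092), ("kafka-3", 9092)]
# for host, port in brokers:
#     print(host, port, "reachable" if reachable(host, port) else "UNREACHABLE")
```

Note that a successful TCP connect only proves the port is open; the ReplicaFetcher failure above can still occur if, for example, `advertised.listeners` hostnames resolve differently from different brokers.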

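Since the three server.properties files are meant to differ only in `broker.id` and the listener hostnames, a small cross-check over them can catch copy-paste mistakes such as duplicate `broker.id` values or brokers pointing at different ZooKeeper ensembles. A sketch under that assumption; `parse_props` and `check_cluster` are hypothetical helpers, not Kafka APIs:

```python
def parse_props(text):
    """Parse a Java-style .properties snippet into a dict, ignoring comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "=" in line:
            key, value = line.split("=", 1)
            props[key.strip()] = value.strip()
    return props

def check_cluster(configs):
    """Return a list of inconsistencies across a list of broker config dicts."""
    problems = []
    ids = [c.get("broker.id") for c in configs]
    if len(set(ids)) != len(ids):
        problems.append("duplicate broker.id values: %s" % ids)
    zks = {c.get("zookeeper.connect") for c in configs}
    if len(zks) != 1:
        problems.append("brokers use different zookeeper.connect: %s" % sorted(zks))
    return problems
```

For the configs quoted above, the check would come back clean, which is consistent with the reporter's suspicion that the problem is not in the broker configuration itself.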


--
This message was sent by Atlassian Jira
(v8.3.4#803005)
