Vitaly Brodetskyi created AMBARI-17929:
------------------------------------------

             Summary: Kafka brokers went down after Ambari upgrade due to IllegalArgumentException
                 Key: AMBARI-17929
                 URL: https://issues.apache.org/jira/browse/AMBARI-17929
             Project: Ambari
          Issue Type: Bug
          Components: ambari-server
    Affects Versions: 2.4.0
            Reporter: Vitaly Brodetskyi
            Assignee: Vitaly Brodetskyi
            Priority: Blocker
             Fix For: 2.4.0


*Steps*
# Deploy HDP-2.4.2 cluster with Ambari 2.2.2.0
# Upgrade Ambari to 2.4.0.0
# Observe the status of the Kafka brokers (one way to query broker state is sketched below)
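
For step 3, one illustrative way (not part of the original report) to check broker state after the upgrade is to query the Ambari REST API for the KAFKA_BROKER host components. The Ambari address, cluster name and credentials below are placeholders:
{code}
# Illustrative sketch only: list the state of every KAFKA_BROKER host component
# via the Ambari v1 REST API. Host, cluster name and credentials are placeholders.
import requests

AMBARI_URL = "http://ambari-server.example.com:8080"   # placeholder
CLUSTER = "cl1"                                         # placeholder
AUTH = ("admin", "admin")                               # placeholder credentials

url = (f"{AMBARI_URL}/api/v1/clusters/{CLUSTER}/host_components"
       "?HostRoles/component_name=KAFKA_BROKER"
       "&fields=HostRoles/host_name,HostRoles/state")

resp = requests.get(url, auth=AUTH)
resp.raise_for_status()

# A broker that failed to start typically shows up as INSTALLED (stopped) rather than STARTED.
for item in resp.json().get("items", []):
    roles = item["HostRoles"]
    print(f'{roles["host_name"]}: {roles["state"]}')
{code}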

*Result*
All Kafka brokers are reported as down.
The broker logs show the following exception:
{code}
[2016-07-27 05:48:26,535] INFO Initializing Kafka Timeline Metrics Sink (org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter)
[2016-07-27 05:48:26,571] INFO Started Kafka Timeline metrics reporter with polling period 10 seconds (org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter)
[2016-07-27 05:48:26,716] INFO KafkaConfig values:
        request.timeout.ms = 30000
        log.roll.hours = 168
        inter.broker.protocol.version = 0.9.0.X
        log.preallocate = false
        security.inter.broker.protocol = PLAINTEXTSASL
        controller.socket.timeout.ms = 30000
        broker.id.generation.enable = true
        ssl.keymanager.algorithm = SunX509
        ssl.key.password = [hidden]
        log.cleaner.enable = true
        ssl.provider = null
        num.recovery.threads.per.data.dir = 1
        background.threads = 10
        unclean.leader.election.enable = true
        sasl.kerberos.kinit.cmd = /usr/bin/kinit
        replica.lag.time.max.ms = 10000
        ssl.endpoint.identification.algorithm = null
        auto.create.topics.enable = true
        zookeeper.sync.time.ms = 2000
        ssl.client.auth = none
        ssl.keystore.password = [hidden]
        log.cleaner.io.buffer.load.factor = 0.9
        offsets.topic.compression.codec = 0
        log.retention.hours = 168
        log.dirs = /kafka-logs
        ssl.protocol = TLS
        log.index.size.max.bytes = 10485760
        sasl.kerberos.min.time.before.relogin = 60000
        log.retention.minutes = null
        connections.max.idle.ms = 600000
        ssl.trustmanager.algorithm = PKIX
        offsets.retention.minutes = 86400000
        max.connections.per.ip = 2147483647
        replica.fetch.wait.max.ms = 500
        metrics.num.samples = 2
        port = 6667
        offsets.retention.check.interval.ms = 600000
        log.cleaner.dedupe.buffer.size = 134217728
        log.segment.bytes = 1073741824
        group.min.session.timeout.ms = 6000
        producer.purgatory.purge.interval.requests = 10000
        min.insync.replicas = 1
        ssl.truststore.password = [hidden]
        log.flush.scheduler.interval.ms = 9223372036854775807
        socket.receive.buffer.bytes = 102400
        leader.imbalance.per.broker.percentage = 10
        num.io.threads = 8
        zookeeper.connect = nats11-36-alzs-dgm10toeriedwngdha-s11-3.openstacklocal:2181,nats11-36-alzs-dgm10toeriedwngdha-s11-4.openstacklocal:2181,nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:2181
        queued.max.requests = 500
        offsets.topic.replication.factor = 3
        replica.socket.timeout.ms = 30000
        offsets.topic.segment.bytes = 104857600
        replica.high.watermark.checkpoint.interval.ms = 5000
        broker.id = -1
        ssl.keystore.location = /etc/security/serverKeys/keystore.jks
        listeners = PLAINTEXT://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
        log.flush.interval.messages = 9223372036854775807
        principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
        log.retention.ms = null
        offsets.commit.required.acks = -1
        sasl.kerberos.principal.to.local.rules = [DEFAULT]
        group.max.session.timeout.ms = 30000
        num.replica.fetchers = 1
        advertised.listeners = PLAINTEXT://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
        replica.socket.receive.buffer.bytes = 65536
        delete.topic.enable = false
        log.index.interval.bytes = 4096
        metric.reporters = []
        compression.type = producer
        log.cleanup.policy = delete
        controlled.shutdown.max.retries = 3
        log.cleaner.threads = 1
        quota.window.size.seconds = 1
        zookeeper.connection.timeout.ms = 25000
        offsets.load.buffer.size = 5242880
        zookeeper.session.timeout.ms = 30000
        ssl.cipher.suites = null
        authorizer.class.name = org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
        sasl.kerberos.ticket.renew.jitter = 0.05
        sasl.kerberos.service.name = null
        controlled.shutdown.enable = true
        offsets.topic.num.partitions = 50
        quota.window.num = 11
        message.max.bytes = 1000000
        log.cleaner.backoff.ms = 15000
        log.roll.jitter.hours = 0
        log.retention.check.interval.ms = 300000
        replica.fetch.max.bytes = 1048576
        log.cleaner.delete.retention.ms = 86400000
        fetch.purgatory.purge.interval.requests = 10000
        log.cleaner.min.cleanable.ratio = 0.5
        offsets.commit.timeout.ms = 5000
        zookeeper.set.acl = false
        log.retention.bytes = -1
        offset.metadata.max.bytes = 4096
        leader.imbalance.check.interval.seconds = 300
        quota.consumer.default = 9223372036854775807
        log.roll.jitter.ms = null
        reserved.broker.max.id = 1000
        replica.fetch.backoff.ms = 1000
        advertised.host.name = null
        quota.producer.default = 9223372036854775807
        log.cleaner.io.buffer.size = 524288
        controlled.shutdown.retry.backoff.ms = 5000
        log.dir = /tmp/kafka-logs
        log.flush.offset.checkpoint.interval.ms = 60000
        log.segment.delete.delay.ms = 60000
        num.partitions = 1
        num.network.threads = 3
        socket.request.max.bytes = 104857600
        sasl.kerberos.ticket.renew.window.factor = 0.8
        log.roll.ms = null
        ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
        socket.send.buffer.bytes = 102400
        log.flush.interval.ms = null
        ssl.truststore.location = /etc/security/serverKeys/truststore.jks
        log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
        default.replication.factor = 1
        metrics.sample.window.ms = 30000
        auto.leader.rebalance.enable = true
        host.name =
        ssl.truststore.type = JKS
        advertised.port = null
        max.connections.per.ip.overrides =
        replica.fetch.min.bytes = 1
        ssl.keystore.type = JKS
 (kafka.server.KafkaConfig)
[2016-07-27 05:48:26,804] FATAL  (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: security.inter.broker.protocol must be a protocol in the configured set of advertised.listeners. The valid options based on currently configured protocols are Set(PLAINTEXT, SSL)
        at scala.Predef$.require(Predef.scala:233)
        at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:957)
        at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:935)
        at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:699)
        at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:696)
        at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
        at kafka.Kafka$.main(Kafka.scala:58)
        at kafka.Kafka.main(Kafka.scala)
{code}
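
The root cause is visible in the configuration dump above: after the upgrade, listeners and advertised.listeners only expose PLAINTEXT and SSL endpoints, while security.inter.broker.protocol is still PLAINTEXTSASL, so the requirement check in kafka.server.KafkaConfig.validateValues rejects the configuration. The sketch below shows what an internally consistent server.properties could look like; it is illustrative only (the PLAINTEXTSASL listener syntax and the port assignments are assumptions, not the actual fix applied here):
{code}
# Sketch of a consistent broker configuration (assumed values, for illustration only):
# the inter-broker protocol must name one of the advertised listener protocols.
listeners=PLAINTEXTSASL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
advertised.listeners=PLAINTEXTSASL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
security.inter.broker.protocol=PLAINTEXTSASL

# Alternatively, keep the existing PLAINTEXT/SSL listeners and point the inter-broker
# protocol at one of them (only appropriate if SASL is not required between brokers):
# security.inter.broker.protocol=PLAINTEXT
{code}
Either variant passes the check because the inter-broker protocol then matches a protocol that is actually present in advertised.listeners.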



