[ https://issues.apache.org/jira/browse/KAFKA-9787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17077834#comment-17077834 ]

Jordan Moore edited comment on KAFKA-9787 at 4/8/20, 4:57 AM:
--------------------------------------------------------------

How did you manage to uncleanly terminate a broker? Did the machine lose power?
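
The ERROR in the quoted log below shows java.nio.file.Files.createFile failing with
"Input/output error" on /bitnami/kafka/data/replication-offset-checkpoint, i.e. the
operating system refusing the write rather than Kafka itself. As a minimal sketch
(assumptions: the same data directory as in the log, run as the same user the broker
runs as), this standalone probe repeats the same java.nio call outside the broker to
check whether the filesystem still returns the I/O error:

import java.nio.file.{FileAlreadyExistsException, FileSystemException, Files, Paths}

// Hypothetical probe, not part of Kafka: repeats the Files.createFile call that
// CheckpointFile fails on in the stack trace below, against the same path.
object CheckpointProbe extends App {
  val path = Paths.get("/bitnami/kafka/data/replication-offset-checkpoint")
  try {
    Files.createFile(path)   // same call as Files.createFile in the stack trace
    println(s"created $path - the filesystem accepted the write")
    Files.delete(path)       // clean up so the data directory is left untouched
  } catch {
    case _: FileAlreadyExistsException =>
      println(s"$path already exists - try reading or copying it instead")
    case e: FileSystemException =>
      println(s"filesystem error, matching the broker log: $e")
  }
}

If the probe fails with the same Input/output error, the disk or mount backing
/bitnami/kafka/data likely needs attention (e.g. a filesystem check or remount)
before the broker will start.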



> The single-node broker is not coming up after unclean shutdown
> ---------------------------------------------------------------
>
>                 Key: KAFKA-9787
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9787
>             Project: Kafka
>          Issue Type: Bug
>          Components: admin
>    Affects Versions: 2.3.0
>         Environment: Kafka version 2.3.0
> zookeeper.version=3.4.14
> OS: ubuntu 16.94
> Kernel version - 4.4.0-131-generic
>            Reporter: etaven
>            Priority: Major
>             Fix For: 2.3.0
>
>
> After an unclean shutdown, the single-node Kafka broker is not coming up.
> The logs are shown below. Please explain the reason for this behaviour.
> ___________________
> [2020-02-12 12:43:13,532] INFO [ProducerStateManager partition=__consumer_offsets-23] Loading producer state from snapshot file '/bitnami/kafka/data/__consumer_offsets-23/00000000000000000005.snapshot' (kafka.log.ProducerStateManager)
> [2020-02-12 12:43:13,532] INFO [Log partition=__consumer_offsets-23, dir=/bitnami/kafka/data] Completed load of log with 1 segments, log start offset 0 and log end offset 5 in 60 ms (kafka.log.Log)
> [2020-02-12 12:43:13,537] INFO Logs loading complete in 1809 ms. (kafka.log.LogManager)
> [2020-02-12 12:43:13,548] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
> [2020-02-12 12:43:13,549] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
> [2020-02-12 12:43:13,912] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
> [2020-02-12 12:43:13,952] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
> [2020-02-12 12:43:13,954] INFO [SocketServer brokerId=1001] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
> [2020-02-12 12:43:13,981] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2020-02-12 12:43:13,984] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2020-02-12 12:43:13,986] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2020-02-12 12:43:13,988] INFO [ExpirationReaper-1001-ElectPreferredLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
> [2020-02-12 12:43:14,024] ERROR [KafkaServer id=1001] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
> java.nio.file.FileSystemException: /bitnami/kafka/data/replication-offset-checkpoint: Input/output error
>     at sun.nio.fs.UnixException.translateToIOException(UnixException.java:91)
>     at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>     at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>     at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
>     at java.nio.file.Files.newByteChannel(Files.java:361)
>     at java.nio.file.Files.createFile(Files.java:632)
>     at kafka.server.checkpoints.CheckpointFile.<init>(CheckpointFile.scala:45)
>     at kafka.server.checkpoints.OffsetCheckpointFile.<init>(OffsetCheckpointFile.scala:56)
>     at kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:191)
>     at kafka.server.ReplicaManager$$anonfun$6.apply(ReplicaManager.scala:190)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>     at kafka.server.ReplicaManager.<init>(ReplicaManager.scala:190)
>     at kafka.server.ReplicaManager.<init>(ReplicaManager.scala:165)
>     at kafka.server.KafkaServer.createReplicaManager(KafkaServer.scala:356)
>     at kafka.server.KafkaServer.startup(KafkaServer.scala:258)
>     at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
>     at kafka.Kafka$.main(Kafka.scala:84)
>     at kafka.Kafka.main(Kafka.scala)
> [2020-02-12 12:43:14,027] INFO [KafkaServer id=1001] shutting down (kafka.server.KafkaServer)
> [2020-02-12 12:43:14,028] INFO [SocketServer brokerId=1001] Stopping socket server request processors (kafka.network.SocketServer)
> [2020-02-12 12:43:14,033] INFO [SocketServer brokerId=1001] Stopped socket server request processors (kafka.network.SocketServer)
> ___________________________________________________________
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
