[ https://issues.apache.org/jira/browse/KAFKA-6901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16695985#comment-16695985 ]

Grégory R. commented on KAFKA-6901:
-----------------------------------

It was an antivirus issue.

Closing the request.
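For anyone hitting the same symptom: the call that fails in the stack trace is the segment rename in kafka.log.Log.asyncDeleteSegment, which marks a segment for deletion by appending a ".deleted" suffix via Utils.atomicMoveWithFallback (try an atomic rename first, fall back to a plain move). A minimal, self-contained sketch of that rename strategy, not Kafka's actual code, using a hypothetical AtomicMoveSketch class and only java.nio.file:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class AtomicMoveSketch {

    // Sketch of the rename strategy seen in the stack trace:
    // attempt an atomic rename, and if the filesystem refuses,
    // retry as a plain move, keeping the first failure as suppressed.
    static void atomicMoveWithFallback(Path source, Path target) throws IOException {
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (IOException outer) {
            try {
                Files.move(source, target);
            } catch (IOException inner) {
                inner.addSuppressed(outer);
                throw inner;
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("segment-demo");
        Path log = dir.resolve("00000000000000000000.log");
        Files.write(log, new byte[] {1, 2, 3});

        // Kafka schedules a segment for deletion by renaming it with a ".deleted" suffix.
        atomicMoveWithFallback(log, dir.resolve(log.getFileName() + ".deleted"));

        System.out.println(Files.exists(dir.resolve("00000000000000000000.log.deleted")));
    }
}
```

On Windows, a process (here the antivirus) holding an open handle on the .log file makes both move attempts fail with a sharing violation, which is exactly the FileSystemException in the log above; excluding the Kafka log directory from on-access scanning avoids it.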

> Kafka crashes when trying to delete a segment when retention time is reached
> -----------------------------------------------------------------------------
>
>                 Key: KAFKA-6901
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6901
>             Project: Kafka
>          Issue Type: Bug
>          Components: core, log
>    Affects Versions: 1.0.0, 1.1.0
>         Environment: OS: Windows Server 2012 R2
>            Reporter: Grégory R.
>            Priority: Major
>              Labels: windows
>         Attachments: 20180517 - ProcessExplorer after the crash.png, 20180517 
> - ProcessExplorer before the crash.png
>
>
> With the following setting
> {code:java}
> log.retention.hours = 16{code}
> Kafka tries to delete old segments.
> This action crashes the broker with the following log:
>  
> {code:java}
> [2018-05-11 15:17:58,036] INFO Found deletable segments with base offsets [0] 
> due to retention time 604800000ms breach (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Rolled new log segment for 'event-0' in 12 ms. 
> (kafka.log.Log)
> [2018-05-11 15:17:58,068] INFO Scheduling log segment 0 for log event-0 for 
> deletion. (kafka.log.Log)
> [2018-05-11 15:17:58,068] ERROR Error while deleting segments for event-0 in dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log -> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log.deleted: Le processus ne peut pas accéder au fichier car ce fichier est utilisé par un autre processus.
>         at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>         at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>         at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>         at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>         at java.nio.file.Files.move(Files.java:1395)
>         at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>         at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>         at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>         at kafka.log.Log.deleteSegment(Log.scala:1579)
>         at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>         at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>         at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>         at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>         at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>         at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>         at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>         at kafka.log.Log.deleteSegments(Log.scala:1143)
>         at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>         at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>         at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>         at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>         at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>         at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>         at scala.collection.Iterator.foreach(Iterator.scala:929)
>         at scala.collection.Iterator.foreach$(Iterator.scala:929)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>         at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>         at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>         at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>         at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>         at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>         Suppressed: java.nio.file.FileSystemException: C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log -> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log.deleted: Le processus ne peut pas accéder au fichier car ce fichier est utilisé par un autre processus.
>                 at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>                 at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>                 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>                 at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>                 at java.nio.file.Files.move(Files.java:1395)
>                 at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:679)
>                 ... 37 more
> [2018-05-11 15:17:58,083] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.ReplicaManager)
> [2018-05-11 15:17:58,083] ERROR Uncaught exception in scheduled task 
> 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> org.apache.kafka.common.errors.KafkaStorageException: Error while deleting segments for event-0 in dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log
> Caused by: java.nio.file.FileSystemException: C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log -> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log.deleted: Le processus ne peut pas accéder au fichier car ce fichier est utilisé par un autre processus.
>         at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>         at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>         at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>         at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>         at java.nio.file.Files.move(Files.java:1395)
>         at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:682)
>         at 
> org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>         at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:398)
>         at kafka.log.Log.asyncDeleteSegment(Log.scala:1592)
>         at kafka.log.Log.deleteSegment(Log.scala:1579)
>         at kafka.log.Log.$anonfun$deleteSegments$3(Log.scala:1152)
>         at kafka.log.Log.$anonfun$deleteSegments$3$adapted(Log.scala:1152)
>         at 
> scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
>         at 
> scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
>         at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>         at kafka.log.Log.$anonfun$deleteSegments$2(Log.scala:1152)
>         at 
> scala.runtime.java8.JFunction0$mcI$sp.apply(JFunction0$mcI$sp.java:12)
>         at kafka.log.Log.maybeHandleIOException(Log.scala:1669)
>         at kafka.log.Log.deleteSegments(Log.scala:1143)
>         at kafka.log.Log.deleteOldSegments(Log.scala:1138)
>         at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1211)
>         at kafka.log.Log.deleteOldSegments(Log.scala:1204)
>         at kafka.log.LogManager.$anonfun$cleanupLogs$3(LogManager.scala:715)
>         at 
> kafka.log.LogManager.$anonfun$cleanupLogs$3$adapted(LogManager.scala:713)
>         at 
> scala.collection.TraversableLike$WithFilter.$anonfun$foreach$1(TraversableLike.scala:789)
>         at scala.collection.Iterator.foreach(Iterator.scala:929)
>         at scala.collection.Iterator.foreach$(Iterator.scala:929)
>         at scala.collection.AbstractIterator.foreach(Iterator.scala:1417)
>         at scala.collection.IterableLike.foreach(IterableLike.scala:71)
>         at scala.collection.IterableLike.foreach$(IterableLike.scala:70)
>         at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>         at 
> scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:788)
>         at kafka.log.LogManager.cleanupLogs(LogManager.scala:713)
>         at kafka.log.LogManager.$anonfun$startup$2(LogManager.scala:341)
>         at 
> kafka.utils.KafkaScheduler.$anonfun$schedule$2(KafkaScheduler.scala:110)
>         at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:61)
>         at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>         at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>         at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>         at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>         at java.lang.Thread.run(Thread.java:748)
>         Suppressed: java.nio.file.FileSystemException: C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log -> C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log\event-0\00000000000000000000.log.deleted: Le processus ne peut pas accéder au fichier car ce fichier est utilisé par un autre processus.
>                 at 
> sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>                 at 
> sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>                 at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
>                 at 
> sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>                 at java.nio.file.Files.move(Files.java:1395)
>                 at 
> org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:679)
>                 ... 37 more
> [2018-05-11 15:17:58,083] INFO [ReplicaManager broker=0] Partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,output-0,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,event-0,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,data-0,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 are offline due to failure on log directory C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.ReplicaManager)
> [2018-05-11 15:17:58,146] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,output-0,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-13,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,event-0,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,data-0,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.server.ReplicaFetcherManager)
> [2018-05-11 15:17:58,146] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions __consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, output-0, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, event-0, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, data-0, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40 because they are in the failed log dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.server.ReplicaManager)
> [2018-05-11 15:17:58,146] INFO Stopping serving logs in dir C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log (kafka.log.LogManager)
> [2018-05-11 15:17:58,146] FATAL Shutdown broker because all log dirs in C:\App\VISBridge\kafka_2.12-1.0.0\kafka-log have failed (kafka.log.LogManager)
> C:\App\VISBridge\kafka_2.12-1.0.0\bin\windows>
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
