[ https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16601613#comment-16601613 ]

Stephane Maarek commented on KAFKA-1194:
----------------------------------------

[~lindong] Not fixed as of 
{code}
[2018-09-02 11:03:12,789] INFO Kafka version : 2.1.0-SNAPSHOT (org.apache.kafka.common.utils.AppInfoParser)
[2018-09-02 11:03:12,789] INFO Kafka commitId : 7299e18369999ba2 (org.apache.kafka.common.utils.AppInfoParser)
{code}

Here are the full steps to reproduce and trigger the failure:
{code}
C:\kafka_2.11-2.1.0-SNAPSHOT>kafka-topics.bat --zookeeper 127.0.0.1:2181 --topic second_topic --create --partitions 3 --replication-factor 1
WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
Created topic "second_topic".

C:\kafka_2.11-2.1.0-SNAPSHOT>kafka-console-producer.bat --broker-list 127.0.0.1:9092 --topic second_topic
>hello
>world
>hello
>Terminate batch job (Y/N)? Y

C:\kafka_2.11-2.1.0-SNAPSHOT>kafka-topics.bat --zookeeper 127.0.0.1:2181 --topic second_topic --delete
Topic second_topic is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
{code}

Deleting the topic then triggers a broker shutdown with:
{code}
[2018-09-02 11:04:15,460] ERROR Error while renaming dir for second_topic-1 in log dir C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka\second_topic-1 -> C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka\second_topic-1.d1ceee24d7474152b6fedd61903449e5-delete
        at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
        at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
        at sun.nio.fs.WindowsFileCopy.move(Unknown Source)
        at sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
        at java.nio.file.Files.move(Unknown Source)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:809)
        at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:689)
        at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687)
        at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:687)
        at kafka.log.Log.maybeHandleIOException(Log.scala:1842)
        at kafka.log.Log.renameDir(Log.scala:687)
        at kafka.log.LogManager.asyncDelete(LogManager.scala:833)
        at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:271)
        at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:265)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:251)
        at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:259)
        at kafka.cluster.Partition.delete(Partition.scala:265)
        at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:340)
        at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:370)
        at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:368)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:368)
        at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:200)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:111)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
        at java.lang.Thread.run(Unknown Source)
        Suppressed: java.nio.file.AccessDeniedException: C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka\second_topic-1 -> C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka\second_topic-1.d1ceee24d7474152b6fedd61903449e5-delete
                at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
                at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
                at sun.nio.fs.WindowsFileCopy.move(Unknown Source)
                at sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
                at java.nio.file.Files.move(Unknown Source)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:806)
                ... 23 more
[2018-09-02 11:04:15,460] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\kafka_2.11-2.1.0-SNAPSHOT\data\kafka (kafka.server.ReplicaManager)
{code}
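The failure is reproducible outside Kafka with plain NIO: rename a file while a MappedByteBuffer still references it. Below is a minimal, hypothetical sketch (the file names are invented, and this is not Kafka's actual code, though it mirrors the atomic-move-with-fallback pattern from the stack trace). On Windows the move is typically refused with AccessDeniedException; on POSIX systems it succeeds, because an open mapping there does not lock the path.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.AccessDeniedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class MappedRenameDemo {

    /** Maps a file and tries to rename it while the mapping is still alive.
     *  Returns "succeeded" (typical POSIX outcome) or "denied" (typical
     *  Windows outcome). File names are invented for illustration. */
    static String renameWhileMapped() throws IOException {
        Path dir = Files.createTempDirectory("kafka-1194-demo");
        Path src = dir.resolve("second_topic-1.log");
        Path dst = dir.resolve("second_topic-1.log.deleted");
        Files.write(src, new byte[4096]);

        try (FileChannel ch = FileChannel.open(src, StandardOpenOption.READ)) {
            // Map the file and keep the mapping referenced across the move,
            // as Kafka's index files do: nothing unmaps it before the rename.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, 4096);
            try {
                // Same pattern as Utils.atomicMoveWithFallback: try an atomic
                // move first, fall back to a plain move if it is unsupported.
                try {
                    Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
                } catch (IOException outer) {
                    Files.move(src, dst);
                }
                return "succeeded";
            } catch (AccessDeniedException e) {
                return "denied";   // Windows refuses to rename a mapped file
            } finally {
                map.get(0);        // keep the mapping reachable until here
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("rename " + renameWhileMapped());
    }
}
```

The distinction matters for the log above: the data-dir channel itself is opened with delete-sharing by Java on Windows, so it is specifically the live memory mapping that blocks the rename.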

> The kafka broker cannot delete the old log files after the configured time
> --------------------------------------------------------------------------
>
>                 Key: KAFKA-1194
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1194
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.10.0.0, 0.11.0.0, 1.0.0
>         Environment: Windows
>            Reporter: Tao Qin
>            Priority: Critical
>              Labels: features, patch, windows
>         Attachments: KAFKA-1194.patch, Untitled.jpg, kafka-1194-v1.patch, 
> kafka-1194-v2.patch, screenshot-1.png
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in a Windows environment, with the log retention set to 24 hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the kafka broker still cannot delete the old log files, 
> and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 1516723
>          at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>          at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>          at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>          at scala.collection.immutable.List.foreach(List.scala:76)
>          at kafka.log.Log.deleteOldSegments(Log.scala:418)
>          at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>          at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>          at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>          at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>          at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>          at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>          at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>          at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>          at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>          at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:724)
> I think this error happens because kafka tries to rename the log file while 
> it is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The 
> Javadoc describes it as follows:
> "A mapped byte buffer and the file mapping that it represents remain valid 
> until the buffer itself is garbage-collected."
> Fortunately, I found a forceUnmap function in the kafka code, and perhaps it 
> can be used to free the MappedByteBuffer.
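The unmap-before-rename idea from the description above can be sketched in plain Java. This is not Kafka's actual forceUnmap helper; it is an illustration assuming a Java 9+ runtime, where sun.misc.Unsafe.invokeCleaner releases a mapping eagerly instead of waiting for the buffer to be garbage-collected. The file names are invented.

```java
import java.lang.reflect.Field;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ForceUnmapSketch {

    /** Eagerly releases a MappedByteBuffer's file mapping instead of waiting
     *  for GC. Uses sun.misc.Unsafe.invokeCleaner (Java 9+). Kafka's real
     *  helper is different; this is only an illustration of the idea.
     *  WARNING: touching the buffer after this call crashes the JVM. */
    static void forceUnmap(MappedByteBuffer buffer) throws ReflectiveOperationException {
        Field f = Class.forName("sun.misc.Unsafe").getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Object unsafe = f.get(null);
        Method invokeCleaner = unsafe.getClass().getMethod("invokeCleaner", ByteBuffer.class);
        invokeCleaner.invoke(unsafe, buffer);
    }

    /** Maps an index-like file, unmaps it, then renames it; returns whether
     *  the renamed file exists afterwards. */
    static boolean unmapAndRename() throws Exception {
        Path dir = Files.createTempDirectory("unmap-demo");
        Path src = dir.resolve("00000000000000000000.index");           // invented name
        Path dst = dir.resolve("00000000000000000000.index.deleted");   // invented name
        Files.write(src, new byte[1024]);

        try (FileChannel ch = FileChannel.open(src, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, 1024);
            forceUnmap(map);        // release the mapping before the rename
            // With the mapping gone, the rename can proceed on Windows too.
            Files.move(src, dst);
        }
        return Files.exists(dst);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("renamed: " + unmapAndRename());
    }
}
```

The order of operations is the whole point: unmap first, then rename, and never touch the buffer again afterwards.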



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
