[ https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377110#comment-15377110 ]

Ravi Peri commented on KAFKA-1194:
----------------------------------

The patch is ready. I am working on a few failing unit tests. Once I get a 
100% test pass, I will submit a PR.


On Thu, Jul 14, 2016 at 7:36 AM -0700, "Harald Kirsch (JIRA)" 
<j...@apache.org> wrote:


    [ https://issues.apache.org/jira/browse/KAFKA-1194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15377023#comment-15377023 ]

Harald Kirsch commented on KAFKA-1194:
--------------------------------------

Having a very similar problem, I would hope that the fix covers this one too. 
The message and stack trace are slightly different. We are using the log 
cleaner with compaction and get the stack trace below.

This is on Windows. The error message's claim that another process has the 
file open is misleading: I verified with procexp and a handle search that only 
the Kafka process has the file open, so it is likely blocking itself.

Any chance that the patch will fix this one too?
{noformat}
[2016-07-14 16:09:20,568] ERROR [kafka-log-cleaner-thread-0], Error due to  (kafka.log.LogCleaner)
kafka.common.KafkaStorageException: Failed to change the log file suffix from .cleaned to .swap for log segment 0
        at kafka.log.LogSegment.kafkaStorageException$1(LogSegment.scala:263)
        at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:265)
        at kafka.log.Log.replaceSegments(Log.scala:869)
        at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:395)
        at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:343)
        at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:342)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at kafka.log.Cleaner.clean(LogCleaner.scala:342)
        at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:237)
        at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:215)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
Caused by: java.nio.file.FileSystemException: d:\Search\kafka\__consumer_offsets-40\00000000000000000000.log.cleaned -> d:\Search\kafka\__consumer_offsets-40\00000000000000000000.log.swap: The process cannot access the file because it is being used by another process.
        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:670)
        at kafka.log.FileMessageSet.renameTo(FileMessageSet.scala:364)
        ... 10 more
        Suppressed: java.nio.file.FileSystemException: d:\Search\kafka\__consumer_offsets-40\00000000000000000000.log.cleaned -> d:\Search\kafka\__consumer_offsets-40\00000000000000000000.log.swap: The process cannot access the file because it is being used by another process.
                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:667)
                ... 11 more
{noformat}
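
For context on the misleading "used by another process" wording: Kafka 
memory-maps its offset index files, and on Windows a live file mapping keeps 
the file pinned even after the owning channel is closed, so the process can 
indeed block its own rename. The suppressed exception above shows that both 
attempts inside Utils.atomicMoveWithFallback failed, which points at the 
mapping rather than the move mode. A minimal standalone sketch (hypothetical 
file names, not Kafka code) that typically reproduces the failure on Windows:

{code:java}
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.FileSystemException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class MappedRenameDemo {
    public static void main(String[] args) throws Exception {
        // Hypothetical names standing in for Kafka's segment files.
        Path source = Paths.get("00000000000000000000.log.cleaned");
        Path target = Paths.get("00000000000000000000.log.swap");
        Files.write(source, new byte[1024]);

        MappedByteBuffer mapped;
        try (FileChannel channel = FileChannel.open(source,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            mapped = channel.map(FileChannel.MapMode.READ_WRITE, 0, 1024);
        }
        // The channel is closed, but the mapping pins the file until the
        // buffer is garbage-collected. On Windows the move below typically
        // fails with "The process cannot access the file because it is
        // being used by another process" -- caused by our own process.
        try {
            Files.move(source, target, StandardCopyOption.ATOMIC_MOVE);
        } catch (FileSystemException e) {
            System.out.println("Rename refused while mapped: " + e);
        }
        mapped.get(0); // keep the mapping reachable past the move attempt
    }
}
{code}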

> The kafka broker cannot delete the old log files after the configured time
> --------------------------------------------------------------------------
>
>                 Key: KAFKA-1194
>                 URL: https://issues.apache.org/jira/browse/KAFKA-1194
>             Project: Kafka
>          Issue Type: Bug
>          Components: log
>    Affects Versions: 0.8.1
>            Environment: Windows
>            Reporter: Tao Qin
>            Assignee: Jay Kreps
>              Labels: features, patch
>             Fix For: 0.10.1.0
>
>         Attachments: KAFKA-1194.patch, kafka-1194-v1.patch, kafka-1194-v2.patch
>
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> We tested it in a Windows environment and set log.retention.hours to 24 
> hours:
> # The minimum age of a log file to be eligible for deletion
> log.retention.hours=24
> After several days, the Kafka broker still cannot delete the old log files, 
> and we get the following exceptions:
> [2013-12-19 01:57:38,528] ERROR Uncaught exception in scheduled task 'kafka-log-retention' (kafka.utils.KafkaScheduler)
> kafka.common.KafkaStorageException: Failed to change the log file suffix from  to .deleted for log segment 1516723
>          at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:249)
>          at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:638)
>          at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:629)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at kafka.log.Log$$anonfun$deleteOldSegments$1.apply(Log.scala:418)
>          at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
>          at scala.collection.immutable.List.foreach(List.scala:76)
>          at kafka.log.Log.deleteOldSegments(Log.scala:418)
>          at kafka.log.LogManager.kafka$log$LogManager$$cleanupExpiredSegments(LogManager.scala:284)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:316)
>          at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:314)
>          at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:743)
>          at scala.collection.Iterator$class.foreach(Iterator.scala:772)
>          at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
>          at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
>          at scala.collection.JavaConversions$JListWrapper.foreach(JavaConversions.scala:615)
>          at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:742)
>          at kafka.log.LogManager.cleanupLogs(LogManager.scala:314)
>          at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:143)
>          at kafka.utils.KafkaScheduler$$anon$1.run(KafkaScheduler.scala:100)
>          at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>          at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>          at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>          at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>          at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:724)
> I think this error happens because Kafka tries to rename the log file while 
> it is still open, so we should close the file before renaming it.
> The index file uses a special data structure, the MappedByteBuffer. The 
> Javadoc describes it as: "A mapped byte buffer and the file mapping that it 
> represents remain valid until the buffer itself is garbage-collected."
> Fortunately, I found a forceUnmap function in the Kafka code, and perhaps it 
> can be used to free the MappedByteBuffer.
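
For reference, the Java-8-era unmap trick generally looks like the sketch 
below. This is an illustration of the reflection-based approach, not 
necessarily the exact forceUnmap in Kafka's code base; it relies on the 
non-public cleaner() accessor of direct buffers, and the buffer must never 
be touched after cleaning or the JVM can crash.

{code:java}
import java.lang.reflect.Method;
import java.nio.MappedByteBuffer;

public final class ForceUnmap {
    private ForceUnmap() {}

    // Eagerly releases the file mapping instead of waiting for GC.
    // Java 8 and earlier only: direct buffers expose sun.misc.Cleaner
    // through a non-public cleaner() accessor. Callers must drop all
    // references to the buffer after this call.
    public static void unmap(MappedByteBuffer buffer) throws Exception {
        Method cleanerAccessor = buffer.getClass().getMethod("cleaner");
        cleanerAccessor.setAccessible(true);
        Object cleaner = cleanerAccessor.invoke(buffer);
        if (cleaner == null) {
            return; // duplicates/slices carry no cleaner of their own
        }
        Method clean = cleaner.getClass().getMethod("clean");
        clean.invoke(cleaner);
    }
}
{code}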



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
