[jira] [Commented] (KAFKA-8172) FileSystemException: The process cannot access the file because it is being used by another process
[ https://issues.apache.org/jira/browse/KAFKA-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17077888#comment-17077888 ]
Vincent Claeysen commented on KAFKA-8172:
-----------------------------------------

2.4 has the same problem. How can it be fixed?

> FileSystemException: The process cannot access the file because it is being used by another process
> ---------------------------------------------------------------------------------------------------
>
>                 Key: KAFKA-8172
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8172
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.1.1, 2.2.0, 2.1.1
>         Environment: Windows
>            Reporter: Bharat Kondeti
>            Priority: Major
>             Fix For: 1.1.1, 2.2.0, 2.1.1
>
>         Attachments: 0001-Fix-to-close-the-handlers-before-renaming-files-and-.patch
>
> Fix to close file handles before renaming files/directories, and reopen them afterwards if required.
>
> File renaming scenarios:
> * Files are renamed to .deleted so they can be deleted
> * .cleaned files are renamed to .swap as part of the Log.replaceSegments flow
> * .swap files are renamed back to the original files as part of the Log.replaceSegments flow
>
> Folder renaming scenarios:
> * When a topic is marked for deletion, its folder is renamed
> * When current logs are replaced with future logs in LogManager
>
> In the above scenarios, if the file handles are not closed, we get a file access violation exception. The idea is to close the logs and file segments before doing a rename, and open them back up if required.
>
> *Segments Deletion Scenario*
> [2018-06-01 17:00:07,566] ERROR Error while deleting segments for test4-1 in dir D:\data\Kafka\kafka-logs (kafka.server.LogDirFailureChannel)
> java.nio.file.FileSystemException: D:\data\Kafka\kafka-logs\test4-1\.log -> D:\data\Kafka\kafka-logs\test4-1\.log.deleted: The process cannot access the file because it is being used by another process.
>     at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
>     at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
>     at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
>     at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
>     at java.nio.file.Files.move(Files.java:1395)
>     at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
>     at org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
>     at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
>     at kafka.log.Log.kafka$log$Log$$asyncDeleteSegment(Log.scala:1601)
>     at kafka.log.Log.kafka$log$Log$$deleteSegment(Log.scala:1588)
>     at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
>     at kafka.log.Log$$anonfun$deleteSegments$1$$anonfun$apply$mcI$sp$1.apply(Log.scala:1170)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>     at kafka.log.Log$$anonfun$deleteSegments$1.apply$mcI$sp(Log.scala:1170)
>     at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
>     at kafka.log.Log$$anonfun$deleteSegments$1.apply(Log.scala:1161)
>     at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
>     at kafka.log.Log.deleteSegments(Log.scala:1161)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1156)
>     at kafka.log.Log.deleteRetentionMsBreachedSegments(Log.scala:1228)
>     at kafka.log.Log.deleteOldSegments(Log.scala:1222)
>     at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:854)
>     at kafka.log.LogManager$$anonfun$cleanupLogs$3.apply(LogManager.scala:852)
>     at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
>     at scala.collection.immutable.List.foreach(List.scala:392)
>     at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
>     at kafka.log.LogManager.cleanupLogs(LogManager.scala:852)
>     at kafka.log.LogManager$$anonfun$startup$1.apply$mcV$sp(LogManager.scala:385)
>     at kafka.utils.KafkaScheduler$$anonfun$1.apply$mcV$sp(KafkaScheduler.scala:110)
>     at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:62)
>     at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>     at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
>     at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>     at java.lang.Thread.run(Thread.java:748)
>     Suppressed: java.nio.file.FileSystemException: D:\data\Kafka\
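The pattern the issue proposes (close the open handle, then rename, then reopen if needed) can be sketched as follows. This is a minimal illustration, not code from the Kafka patch: the class name `CloseBeforeRename` and the temp-file setup are hypothetical, and the sketch only mirrors the .deleted-suffix rename that `LogSegment.changeFileSuffixes` performs.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

// Close-before-rename sketch: on Windows, Files.move fails with a
// FileSystemException while another handle to the file is still open,
// so any open channel must be closed before the rename.
public class CloseBeforeRename {
    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("segment", ".log");
        Path deleted = log.resolveSibling(log.getFileName() + ".deleted");

        // Simulate the open segment handle that a LogSegment would hold.
        FileChannel channel = FileChannel.open(log, StandardOpenOption.READ);

        // Close the handle first; renaming while it is open is what
        // triggers the access violation on Windows.
        channel.close();

        // Same rename style as Utils.atomicMoveWithFallback: try an
        // atomic move within the same directory.
        Files.move(log, deleted, StandardCopyOption.ATOMIC_MOVE);
        System.out.println(Files.exists(deleted) ? "renamed" : "rename failed");

        Files.deleteIfExists(deleted); // clean up the temp file
    }
}
```

Note that on POSIX filesystems the rename succeeds even with the channel open; the close step only matters on Windows, which is why the bug is Windows-specific.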
[jira] [Commented] (KAFKA-8172) FileSystemException: The process cannot access the file because it is being used by another process
[ https://issues.apache.org/jira/browse/KAFKA-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16827447#comment-16827447 ]
Dongjoon Hyun commented on KAFKA-8172:
--------------------------------------

Can we remove `2.2.0` from the `Fix version` because 2.2.0 is already released?
[jira] [Commented] (KAFKA-8172) FileSystemException: The process cannot access the file because it is being used by another process
[ https://issues.apache.org/jira/browse/KAFKA-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804243#comment-16804243 ]
Bharat Kondeti commented on KAFKA-8172:
---------------------------------------

Pull request link
[jira] [Commented] (KAFKA-8172) FileSystemException: The process cannot access the file because it is being used by another process
[ https://issues.apache.org/jira/browse/KAFKA-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16804235#comment-16804235 ]
Bharat Kondeti commented on KAFKA-8172:
---------------------------------------

Pull request