steveloughran commented on a change in pull request #22952: [SPARK-20568][SS] Provide option to clean up completed files in streaming query
URL: https://github.com/apache/spark/pull/22952#discussion_r248249647
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSource.scala
 ##########
 @@ -257,16 +289,65 @@ class FileStreamSource(
   * equal to `end` and will only request offsets greater than `end` in the future.
    */
   override def commit(end: Offset): Unit = {
-    // No-op for now; FileStreamSource currently garbage-collects files based on timestamp
-    // and the value of the maxFileAge parameter.
+    def move(entry: FileEntry, baseArchiveDirPath: String): Unit = {
+      val curPath = new Path(entry.path)
+      val curPathUri = curPath.toUri
+
+      val newPath = new Path(baseArchiveDirPath + curPathUri.getPath)
+      try {
+        logDebug(s"Creating directory if it doesn't exist ${newPath.getParent}")
+        if (!fs.exists(newPath.getParent)) {
+          fs.mkdirs(newPath.getParent)
+        }
+
+        logDebug(s"Archiving completed file $curPath to $newPath")
+        fs.rename(curPath, newPath)
 
 Review comment:
   rename()'s true/false return value is pretty meaningless: it tells you that the operation failed but gives no explanation as to why. See [HADOOP-11452](https://issues.apache.org/jira/browse/HADOOP-11452) for discussion on making rename/3 public; that variant does throw useful exceptions on failures. Happy for anyone to take up work on that...
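
   To illustrate the contrast outside Hadoop (this is plain `java.io`/`java.nio`, not the `FileSystem` API the PR uses): `File.renameTo` returns a bare boolean just like `FileSystem.rename`, while `Files.move` throws a descriptive exception on failure, which is the style HADOOP-11452 argues for. A minimal sketch:

   ```java
   import java.io.File;
   import java.io.IOException;
   import java.nio.file.Files;
   import java.nio.file.Path;

   public class RenameDemo {
       public static void main(String[] args) {
           // A source file that does not exist, so both renames must fail.
           File missing = new File("definitely-missing-src.txt");

           // Boolean-style rename: we learn only that it failed, not why.
           boolean ok = missing.renameTo(new File("dest.txt"));
           System.out.println("renameTo returned: " + ok);

           // Exception-style move: the failure carries a diagnostic.
           try {
               Files.move(missing.toPath(), Path.of("dest.txt"));
           } catch (IOException e) {
               // e.g. NoSuchFileException: definitely-missing-src.txt
               System.out.println("Files.move threw: " + e.getClass().getSimpleName());
           }
       }
   }
   ```

   With the boolean API the caller is left to guess whether the cause was a missing source, a permissions problem, or a cross-device move; the exception names it directly.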

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
