HeartSaVioR commented on a change in pull request #31638:
URL: https://github.com/apache/spark/pull/31638#discussion_r602649700



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/streaming/FileStreamSink.scala
##########
@@ -43,12 +44,23 @@ object FileStreamSink extends Logging {
     path match {
       case Seq(singlePath) =>
         val hdfsPath = new Path(singlePath)
-        val fs = hdfsPath.getFileSystem(hadoopConf)
-        if (fs.isDirectory(hdfsPath)) {
-          val metadataPath = getMetadataLogPath(fs, hdfsPath, sqlConf)
-          fs.exists(metadataPath)
-        } else {
-          false
+        if (SparkHadoopUtil.get.isGlobPath(hdfsPath)) {

Review comment:
       The expectation on the user side matters here: the user explicitly specifies 
that the input path is a "glob path". The behavior is easier to reason about if we 
simply ignore the metadata whenever the input path is a glob path.
       I'm not sure where exactly this breaks, but I can see an argument that the 
file index should be a MetadataLogFileIndex. To me, listing files even in this case 
is consistent with the cases where the glob path matches multiple directories.
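
       For reference, a minimal sketch of the control flow this suggests for 
`hasMetadata`: return `false` for a glob path so the reader falls back to a plain 
file listing. The enclosing object name is made up for illustration, and the exact 
shape of the surrounding code is an assumption; only the `isGlobPath` check and the 
pre-existing directory/metadata-path logic come from the diff above.

       ```scala
       package org.apache.spark.sql.execution.streaming

       import org.apache.hadoop.conf.Configuration
       import org.apache.hadoop.fs.Path

       import org.apache.spark.deploy.SparkHadoopUtil
       import org.apache.spark.sql.internal.SQLConf

       // Hypothetical wrapper object; the real logic lives in FileStreamSink itself.
       object GlobAwareHasMetadataSketch {
         def hasMetadata(path: Seq[String], hadoopConf: Configuration, sqlConf: SQLConf): Boolean = {
           path match {
             case Seq(singlePath) =>
               val hdfsPath = new Path(singlePath)
               if (SparkHadoopUtil.get.isGlobPath(hdfsPath)) {
                 // The user explicitly wrote a glob: ignore any sink metadata and let the
                 // reader fall back to a plain file listing, consistent with the case
                 // where the glob matches multiple directories.
                 false
               } else {
                 val fs = hdfsPath.getFileSystem(hadoopConf)
                 if (fs.isDirectory(hdfsPath)) {
                   // getMetadataLogPath is the existing helper referenced in the diff above.
                   fs.exists(FileStreamSink.getMetadataLogPath(fs, hdfsPath, sqlConf))
                 } else {
                   false
                 }
               }
             case _ => false
           }
         }
       }
       ```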




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


