Github user marmbrus commented on a diff in the pull request:

    https://github.com/apache/spark/pull/11509#discussion_r55075899
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/ResolvedDataSource.scala ---
    @@ -92,19 +96,61 @@ object ResolvedDataSource extends Logging {
         }
       }
     
    +  // TODO: Combine with apply?
       def createSource(
           sqlContext: SQLContext,
           userSpecifiedSchema: Option[StructType],
           providerName: String,
           options: Map[String, String]): Source = {
         val provider = lookupDataSource(providerName).newInstance() match {
    -      case s: StreamSourceProvider => s
    +      case s: StreamSourceProvider =>
    +        s.createSource(sqlContext, userSpecifiedSchema, providerName, options)
    +
    +      case format: FileFormat =>
    +        val caseInsensitiveOptions = new CaseInsensitiveMap(options)
    +        val path = caseInsensitiveOptions.getOrElse("path", {
    +          throw new IllegalArgumentException("'path' is not specified")
    +        })
    +        val metadataPath = caseInsensitiveOptions.getOrElse("metadataPath", s"$path/_metadata")
    +
    +        val allPaths = caseInsensitiveOptions.get("path")
    +        val globbedPaths = allPaths.toSeq.flatMap { path =>
    --- End diff --
    
    Good catch. I think the next step here is to actually try to combine the logic for streaming and batch file sources, as the TODO above says.
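    To illustrate what "combining the logic" could look like, here is a minimal, hypothetical sketch (not Spark's actual API): both the batch and streaming code paths resolve `path` and `metadataPath` through one shared helper backed by a case-insensitive option map, mirroring the `CaseInsensitiveMap` lookups in the diff. The names `CIMap` and `resolvePaths` are invented for this sketch.

    ```scala
    // Hypothetical sketch of sharing option resolution between the batch
    // and streaming file-source paths; names here are illustrative only.
    object ResolveSketch {
      // Minimal stand-in for the CaseInsensitiveMap used in the diff.
      final case class CIMap(underlying: Map[String, String]) {
        private val lower = underlying.map { case (k, v) => (k.toLowerCase, v) }
        def get(key: String): Option[String] = lower.get(key.toLowerCase)
        def getOrElse(key: String, default: => String): String =
          get(key).getOrElse(default)
      }

      // Shared helper: both code paths would resolve 'path' and the derived
      // 'metadataPath' here instead of duplicating the lookup logic.
      def resolvePaths(options: Map[String, String]): (String, String) = {
        val ci = CIMap(options)
        val path = ci.getOrElse("path",
          throw new IllegalArgumentException("'path' is not specified"))
        val metadataPath = ci.getOrElse("metadataPath", s"$path/_metadata")
        (path, metadataPath)
      }

      def main(args: Array[String]): Unit = {
        // Keys match case-insensitively, as in the diff's option handling.
        val (p, m) = resolvePaths(Map("PATH" -> "/data/events"))
        assert(p == "/data/events")
        assert(m == "/data/events/_metadata")
        println("ok")
      }
    }
    ```

    The point of the sketch is only that a single helper keeps the error message and the `_metadata` default in one place, so the streaming and batch sources cannot drift apart.
    
    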


