[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-10 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277984#comment-15277984
 ] 

Sean Owen commented on SPARK-15245:
---

Ultimately, the lower levels raise the correct error in this case. I see that 
it refers to a "basePath" argument, yes, but would it confuse a user?

> stream API throws an exception with an incorrect message when the path is not 
> a directory
> -
>
> Key: SPARK-15245
> URL: https://issues.apache.org/jira/browse/SPARK-15245
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Reporter: Hyukjin Kwon
>Priority: Trivial
>
> {code}
> val path = "tmp.csv" // This is not a directory
> val cars = spark.read
>   .format("csv")
>   .stream(path)
>   .write
>   .option("checkpointLocation", "streaming.metadata")
>   .startStream("tmp")
> {code}
> This throws an exception as below.
> {code}
> java.lang.IllegalArgumentException: Option 'basePath' must be a directory
>   at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.basePaths(PartitioningAwareFileCatalog.scala:180)
>   at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.inferPartitioning(PartitioningAwareFileCatalog.scala:117)
>   at 
> org.apache.spark.sql.execution.datasources.ListingFileCatalog.partitionSpec(ListingFileCatalog.scala:54)
>   at 
> org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.allFiles(PartitioningAwareFileCatalog.scala:65)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$dataFrameBuilder$1(DataSource.scala:197)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at 
> org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at 
> org.apache.spark.sql.execution.streaming.FileStreamSource.getBatch(FileStreamSource.scala:101)
>   at 
> org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:313)
>   at 
> org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:310)
>   at 
> scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> {code}
> It seems {{path}} is set to {{basePath}} in {{DataSource}}. It would be 
> great if it had a better message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-10 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277979#comment-15277979
 ] 

Hyukjin Kwon commented on SPARK-15245:
--

Sorry for leaving comments again and again, but I think this JIRA might not 
have to be closed (my PR was closed, though), because there is a kind of hidden 
option {{basePath}} for reading partitioned tables in data sources, 
[here|https://github.com/apache/spark/blob/f7b7ef41662d7d02fc4f834f3c6c4ee8802e949c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/PartitioningAwareFileCatalog.scala#L170-L173],
 and the {{stream()}} API overwrites it, 
[here|https://github.com/apache/spark/blob/f7b7ef41662d7d02fc4f834f3c6c4ee8802e949c/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSource.scala#L183].
 So I feel the message should be corrected anyway.
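To illustrate the overlap (a hypothetical sketch with made-up paths, not the actual Spark internals):

{code}
// Batch read of a partitioned layout: the user may set 'basePath'
// explicitly so partition columns are discovered relative to it.
spark.read
  .option("basePath", "/data/table")            // must be a directory
  .format("parquet")
  .load("/data/table/year=2016/month=05")

// With stream(), the user never sets 'basePath': the API fills it in
// from the 'path' argument internally, so an error mentioning
// 'basePath' refers to an option the user never supplied.
{code}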




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277716#comment-15277716
 ] 

Hyukjin Kwon commented on SPARK-15245:
--

(BTW, as you might already know, the reason I thought the message was wrong is 
that it should say {{path}}, not {{basePath}}, because the {{path}} option is 
the one exposed to users via the {{stream()}} API, e.g. 
{{option("path", path).stream()}}.)
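In other words (illustrative only; the file name is made up), the two calls below supply the same user-facing {{path}} option:

{code}
// Both forms set the same user-facing 'path' option; the user never
// sets 'basePath', which is why the error message reads oddly.
spark.read.format("csv").stream("tmp.csv")
spark.read.format("csv").option("path", "tmp.csv").stream()
{code}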




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277700#comment-15277700
 ] 

Hyukjin Kwon commented on SPARK-15245:
--

Thank you so much. Let me close my PR.




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277697#comment-15277697
 ] 

Sean Owen commented on SPARK-15245:
---

The message seems correct. I agree it could possibly be checked earlier though, 
yes. I also agree it's not clear if it's worth a little extra code and extra 
calls to check it, since it's a rare failure mode and handled quickly and 
correctly anyway. It doesn't affect normal usage.




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Hyukjin Kwon (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277696#comment-15277696
 ] 

Hyukjin Kwon commented on SPARK-15245:
--

Oh, the main reason is that I thought {{'basePath' must be a directory}} is not 
a correct message.

Also, I realised this could be checked earlier, on the driver side. The 
exception does not seem to be raised on the driver side, so the driver does not 
catch it: in my local test, the code above returns 0 but still prints the 
exception message.

So I thought this could be caught earlier, with a better message.

But after thinking about it more, I started to worry that opening the given 
paths might add overhead, especially on S3.

I am not sure about this one. I can close it if we think it is not appropriate.
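A minimal sketch of the kind of early driver-side check I had in mind (hypothetical; {{assertIsDirectory}} and the error wording are mine, not Spark's):

{code}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

// Validate on the driver before the stream starts. The extra
// getFileStatus() call is one more filesystem RPC per path, which is
// exactly the overhead concern for object stores such as S3.
def assertIsDirectory(pathString: String, conf: Configuration): Unit = {
  val path = new Path(pathString)
  val fs = path.getFileSystem(conf)
  require(fs.getFileStatus(path).isDirectory,
    s"Option 'path' must be a directory: $pathString")
}
{code}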




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Sean Owen (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277690#comment-15277690
 ] 

Sean Owen commented on SPARK-15245:
---

Hm, what's the issue here? It says the argument was not a directory, which is 
the problem.




[jira] [Commented] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Apache Spark (JIRA)

[ 
https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15277647#comment-15277647
 ] 

Apache Spark commented on SPARK-15245:
--

User 'HyukjinKwon' has created a pull request for this issue:
https://github.com/apache/spark/pull/13021
