[jira] [Assigned] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-15245:


Assignee: Apache Spark

> stream API throws an exception with an incorrect message when the path is not
> a directory
> -
>
> Key: SPARK-15245
> URL: https://issues.apache.org/jira/browse/SPARK-15245
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Reporter: Hyukjin Kwon
>Assignee: Apache Spark
>Priority: Trivial
>
> {code}
> val path = "tmp.csv" // This is not a directory
> val cars = spark.read
>   .format("csv")
>   .stream(path)
>   .write
>   .option("checkpointLocation", "streaming.metadata")
>   .startStream("tmp")
> {code}
> This throws the following exception:
> {code}
> java.lang.IllegalArgumentException: Option 'basePath' must be a directory
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.basePaths(PartitioningAwareFileCatalog.scala:180)
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.inferPartitioning(PartitioningAwareFileCatalog.scala:117)
>   at org.apache.spark.sql.execution.datasources.ListingFileCatalog.partitionSpec(ListingFileCatalog.scala:54)
>   at org.apache.spark.sql.execution.datasources.PartitioningAwareFileCatalog.allFiles(PartitioningAwareFileCatalog.scala:65)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:350)
>   at org.apache.spark.sql.execution.datasources.DataSource.org$apache$spark$sql$execution$datasources$DataSource$$dataFrameBuilder$1(DataSource.scala:197)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$createSource$1.apply(DataSource.scala:201)
>   at org.apache.spark.sql.execution.streaming.FileStreamSource.getBatch(FileStreamSource.scala:101)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:313)
>   at org.apache.spark.sql.execution.streaming.StreamExecution$$anonfun$5.apply(StreamExecution.scala:310)
>   at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
> {code}
> It seems {{path}} is set as {{basePath}} in {{DataSource}}, so the error
> reports the internal {{basePath}} option rather than the path the user
> actually supplied. It would be better if the exception carried a clearer
> message.
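A minimal sketch of the kind of up-front validation the report is asking for. This is a hypothetical illustration, not the actual Spark patch: the object and method names are invented, and it uses plain {{java.nio.file}} instead of Spark's Hadoop path handling. The point is only that the message names the user-facing parameter ({{path}}) instead of the internal {{basePath}} option.

```scala
import java.nio.file.{Files, Paths}

// Hypothetical sketch: validate a streaming source path early and fail with a
// message phrased in terms of the argument the user actually passed.
object StreamPathCheck {
  def requireDirectory(path: String): Unit = {
    if (!Files.isDirectory(Paths.get(path))) {
      throw new IllegalArgumentException(
        s"The path '$path' given to stream() must be a directory, " +
          "but it is a file or does not exist")
    }
  }
}
```

With a check like this, passing {{"tmp.csv"}} would fail immediately with a message that points at the offending path, rather than surfacing the confusing {{basePath}} error from deep inside partition inference.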



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Assigned] (SPARK-15245) stream API throws an exception with an incorrect message when the path is not a directory

2016-05-09 Thread Apache Spark (JIRA)

 [ https://issues.apache.org/jira/browse/SPARK-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-15245:


Assignee: (was: Apache Spark)
