[jira] [Updated] (SPARK-8000) SQLContext.read.load() should be able to auto-detect input data

2018-10-10 Thread Sean Owen (JIRA)


 [ 
https://issues.apache.org/jira/browse/SPARK-8000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Owen updated SPARK-8000:
-
Affects Version/s: 2.0.0
   Issue Type: Improvement  (was: Sub-task)
   Parent: (was: SPARK-9576)

> SQLContext.read.load() should be able to auto-detect input data
> ---
>
> Key: SPARK-8000
> URL: https://issues.apache.org/jira/browse/SPARK-8000
> Project: Spark
>  Issue Type: Improvement
>  Components: SQL
>Affects Versions: 2.0.0
>Reporter: Reynold Xin
>Priority: Major
>
> If it is a Parquet file, use Parquet. If it is a JSON file, use JSON. If it 
> is an ORC file, use ORC. If it is a CSV file, use CSV.
> Maybe Spark SQL can also write an output metadata file to specify the schema 
> & data source that's used.
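The proposal above amounts to mapping a file extension to a data source name before dispatching to the corresponding reader. A minimal sketch of that detection step in Python (the helper name and extension table are hypothetical illustrations, not Spark's actual implementation):

```python
import os

# Hypothetical extension-to-format table; an assumption for illustration,
# not Spark SQL's real detection logic.
_EXTENSION_FORMATS = {
    ".parquet": "parquet",
    ".json": "json",
    ".orc": "orc",
    ".csv": "csv",
}


def detect_format(path):
    """Guess the data source format from a file extension, or None if unknown."""
    _, ext = os.path.splitext(path)
    return _EXTENSION_FORMATS.get(ext.lower())


print(detect_format("data/events.PARQUET"))  # parquet
print(detect_format("data/log.json"))        # json
print(detect_format("data/unknown.bin"))     # None
```

A real implementation would likely also need to sniff file contents (e.g. the Parquet magic bytes) for extensionless paths, which is where the suggested output metadata file would help.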



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org



[jira] [Updated] (SPARK-8000) SQLContext.read.load() should be able to auto-detect input data

2015-08-03 Thread Reynold Xin (JIRA)

 [ 
https://issues.apache.org/jira/browse/SPARK-8000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reynold Xin updated SPARK-8000:
---
Parent Issue: SPARK-9576  (was: SPARK-6116)

> SQLContext.read.load() should be able to auto-detect input data
> ---
>
> Key: SPARK-8000
> URL: https://issues.apache.org/jira/browse/SPARK-8000
> Project: Spark
>  Issue Type: Sub-task
>  Components: SQL
>Reporter: Reynold Xin
>
> If it is a Parquet file, use Parquet. If it is a JSON file, use JSON. If it 
> is an ORC file, use ORC. If it is a CSV file, use CSV.
> Maybe Spark SQL can also write an output metadata file to specify the schema 
> & data source that's used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
