[ https://issues.apache.org/jira/browse/SPARK-8000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15153570#comment-15153570 ]

Hyukjin Kwon commented on SPARK-8000:
-------------------------------------

And I got an email from you, which said:

{quote}

Thanks for the email. 

Don't make it that complicated. We just want to simplify the common cases (e.g. 
csv/parquet), and don't need this to work for everything out there.

{quote}

> SQLContext.read.load() should be able to auto-detect input data
> ---------------------------------------------------------------
>
>                 Key: SPARK-8000
>                 URL: https://issues.apache.org/jira/browse/SPARK-8000
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Reynold Xin
>
> If it is a parquet file, use parquet. If it is a JSON file, use JSON. If it 
> is an ORC file, use ORC. If it is a CSV file, use CSV.
> Maybe Spark SQL can also write an output metadata file to specify the schema 
> & data source that's used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
