[ https://issues.apache.org/jira/browse/SPARK-3720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14377155#comment-14377155 ]

iward commented on SPARK-3720:
------------------------------

[~zhanzhang], I see. Since the patch is delayed, we can't use an orcFile API in 
Spark currently. But the problem of reading whole ORC files is urgent; is there 
another way to solve this in Spark?
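
For example, would going through HiveContext be a reasonable interim path: 
register the existing ORC files as an external Hive table and read them with 
HiveQL? A minimal sketch of that idea, assuming Hive 0.12+ is on the classpath; 
the table name "events", its columns, and the path /data/events are placeholders:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

object OrcViaHive {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("orc-via-hive"))
    val hive = new HiveContext(sc)

    // Point an external Hive table at the ORC files already on HDFS,
    // so Hive's SerDe handles the ORC decoding instead of Spark.
    // Table name, columns, and location are hypothetical placeholders.
    hive.sql(
      """CREATE EXTERNAL TABLE IF NOT EXISTS events (id INT, name STRING)
        |STORED AS ORC
        |LOCATION '/data/events'""".stripMargin)

    // Query it like any other Hive table.
    val rows = hive.sql("SELECT id, name FROM events WHERE id > 0")
    rows.collect().foreach(println)

    sc.stop()
  }
}
{code}

Or is there a better-supported route? Reading the raw files with sc.hadoopFile 
and hive-exec's OrcInputFormat also seems possible, but then the OrcStruct 
values have to be unpacked with ObjectInspectors by hand.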

> support ORC in spark sql
> ------------------------
>
>                 Key: SPARK-3720
>                 URL: https://issues.apache.org/jira/browse/SPARK-3720
>             Project: Spark
>          Issue Type: New Feature
>          Components: SQL
>    Affects Versions: 1.1.0
>            Reporter: Fei Wang
>         Attachments: orc.diff
>
>
> The Optimized Row Columnar (ORC) file format provides a highly efficient way 
> to store data on HDFS. The ORC file format has many advantages, such as:
> 1. a single file as the output of each task, which reduces the NameNode's load
> 2. Hive type support, including datetime, decimal, and the complex types 
> (struct, list, map, and union)
> 3. light-weight indexes stored within the file:
>    - skip row groups that don't pass predicate filtering
>    - seek to a given row
> 4. block-mode compression based on data type:
>    - run-length encoding for integer columns
>    - dictionary encoding for string columns
> 5. concurrent reads of the same file using separate RecordReaders
> 6. ability to split files without scanning for markers
> 7. a bound on the amount of memory needed for reading or writing
> 8. metadata stored using Protocol Buffers, which allows addition and removal 
> of fields
> Spark SQL already supports Parquet; supporting ORC would give people more options.
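
For concreteness, the Parquet support mentioned in the description looks like 
the sketch below today; the orcFile counterpart this issue asks for would 
presumably mirror it. parquetFile is existing API, while orcFile here is purely 
hypothetical until a patch lands, and the paths are placeholders:

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ParquetToday {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("parquet-today"))
    val sqlContext = new SQLContext(sc)

    // Existing API: load Parquet files as a SchemaRDD and query them.
    // The path /data/people.parquet is a placeholder.
    val people = sqlContext.parquetFile("/data/people.parquet")
    people.registerTempTable("people")
    sqlContext.sql("SELECT name FROM people").collect().foreach(println)

    // Hypothetical analog this issue proposes (does NOT exist yet):
    // val events = sqlContext.orcFile("/data/events.orc")

    sc.stop()
  }
}
{code}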


