I agree. We need support for ORC files similar to what end users already have 
for Parquet files. That's the purpose of SPARK-2883.
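
For reference, a rough sketch of the developer-facing path Yin describes below, 
using the mapreduce-API OrcNewInputFormat added by HIVE-5728 together with 
Spark's newAPIHadoopFile. The warehouse path here is hypothetical, and the 
OrcStruct rows still have to be unpacked by hand, which is exactly why an 
end-user API like the Parquet one would help:

import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.hive.ql.io.orc.{OrcNewInputFormat, OrcStruct}
import org.apache.spark.{SparkConf, SparkContext}

object OrcReadSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("OrcReadSketch"))

    // HIVE-5728 added OrcNewInputFormat (mapreduce API); it yields
    // NullWritable keys and OrcStruct values, one per row.
    val rows = sc.newAPIHadoopFile(
      "/user/hive/warehouse/some_orc_table",  // hypothetical path
      classOf[OrcNewInputFormat],
      classOf[NullWritable],
      classOf[OrcStruct])

    // No schema is inferred: each OrcStruct has to be unpacked manually
    // (e.g. through its ObjectInspector). toString is used here only to
    // show the raw rows.
    rows.map(_._2.toString).take(10).foreach(println)

    sc.stop()
  }
}

Writing is the trickier half: OrcNewOutputFormat from the same JIRA can 
presumably be driven through saveAsNewAPIHadoopFile, but rows must first be 
serialized into Writables, which again puts this squarely in developer 
territory.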

Thanks.

Zhan Zhang

On Aug 14, 2014, at 11:42 AM, Yin Huai <huaiyin....@gmail.com> wrote:

> I feel that using hadoopFile and saveAsHadoopFile to read and write ORC files 
> is more of a developer-facing approach, because the read/write options have 
> to be populated manually. It seems those new APIs were added by 
> https://issues.apache.org/jira/browse/HIVE-5728.
> 
> As for using OrcOutputFormat (the old one) with saveAsHadoopFile, I am not 
> sure it can work properly: when getRecordWriter is called, the ORC file is 
> probably created in the wrong path 
> (https://github.com/apache/hive/blob/trunk/ql/src/java/org/apache/hadoop/hive/ql/io/orc/OrcOutputFormat.java#L181).
> 

