Hi Deepak and all,
write() is a method of DataFrame; please see
https://spark.apache.org/docs/1.5.1/api/java/org/apache/spark/sql/DataFrame.html
(the last method listed on that page is write()).
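
For completeness, write() returns a DataFrameWriter, and format()/save() are
chained on it. A minimal sketch, reusing the results DataFrame from my earlier
mail:

  val writer: org.apache.spark.sql.DataFrameWriter = results.write
  writer.format("orc").save("yahoo_stocks_orc")
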
The problem is that writing to an explicit hdfs:// path succeeds, but
results.write.format("orc").save("yahoo_stocks_orc")
leaves an empty folder.
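
In case it helps narrow this down, here is a quick check I run in spark-shell
to see which filesystem relative output paths resolve against (just a sketch,
assuming the default SparkContext sc; the namenode host and user below are
placeholders):

  // Which default filesystem do relative output paths resolve against?
  println(sc.hadoopConfiguration.get("fs.defaultFS"))

  // Writing with a fully qualified URI removes the ambiguity entirely
  results.write.format("orc").save("hdfs://<namenode>:8020/user/<user>/yahoo_stocks_orc")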

Could anyone help?

Regards,
Sai

On Mon, Nov 16, 2015 at 8:00 PM Deepak Sharma <deepakmc...@gmail.com> wrote:

> Sai,
> I am a bit confused here.
> How are you using write with results?
> I am using Spark 1.4.1, and when I use write, it complains that write is not
> a member of DataFrame:
> error: value write is not a member of org.apache.spark.sql.DataFrame
>
> Thanks
> Deepak
>
> On Mon, Nov 16, 2015 at 4:10 PM, 张炜 <zhangwei...@gmail.com> wrote:
>
>> Dear all,
>> I am following this article to try using Hive ORC tables from Spark:
>>
>> http://hortonworks.com/hadoop-tutorial/using-hive-with-orc-from-apache-spark/
>>
>> My environment:
>> Hive 1.2.1
>> Spark 1.5.1
>>
>> In a nutshell, I ran spark-shell and created a Hive table:
>>
>> hiveContext.sql("create table yahoo_orc_table (date STRING, open_price
>> FLOAT, high_price FLOAT, low_price FLOAT, close_price FLOAT, volume INT,
>> adj_price FLOAT) stored as orc")
>>
>> I also computed a DataFrame and can show its contents correctly:
>> val results = sqlContext.sql("SELECT * FROM yahoo_stocks_temp")
>>
>> Then I executed the save command
>> results.write.format("orc").save("yahoo_stocks_orc")
>>
>> I can see that a folder named "yahoo_stocks_orc" was created successfully and
>> there is a _SUCCESS file inside it, but no ORC files at all. I repeated this
>> many times with the same result.
>>
>> But
>> results.write.format("orc").save("hdfs://*****:8020/yahoo_stocks_orc")
>> writes the contents successfully.
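>>
>> For what it is worth, reading the ORC output back also works for the HDFS
>> variant (just a sketch; the path mirrors the save call above):
>>
>> val loaded = sqlContext.read.format("orc").load("hdfs://*****:8020/yahoo_stocks_orc")
>> loaded.show()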
>>
>> Please kindly help.
>>
>> Regards,
>> Sai
>>
>>
>
>
> --
> Thanks
> Deepak
> www.bigdatabig.com
> www.keosha.net
>
