Well, sorry for the late response, and thanks a lot for pointing out the clue.
fightf...@163.com
From: Akhil Das
Date: 2015-12-03 14:50
To: Sahil Sareen
CC: fightf...@163.com; user
Subject: Re: spark sql cli query results written to file ?
Oops 3 mins late. :)
Thanks
Best Regards
On Thu, Dec 3, 2015 at 11:49 AM, Sahil Sareen wrote:
Yeah, that's the example from the link I just posted.
-Sahil
On Thu, Dec 3, 2015 at 11:41 AM, Akhil Das
wrote:
Something like this?
val df = sqlContext.read.load("examples/src/main/resources/users.parquet")
df.select("name", "favorite_color").write.save("namesAndFavColors.parquet")
It will save the name and favorite_color columns to a Parquet file. You can
read more information over here
Did you see: http://spark.apache.org/docs/latest/sql-programming-guide.html
-Sahil
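For completeness, the saved file can be read back to verify the round trip. This is a sketch, assuming the Spark 1.x DataFrame API and the output path from the example above:

```scala
// Sketch (assumes an existing SQLContext named sqlContext, as in the
// example above): read the Parquet file written by write.save and inspect it.
val saved = sqlContext.read.parquet("namesAndFavColors.parquet")
saved.printSchema()  // the name and favorite_color columns
saved.show()
```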
On Thu, Dec 3, 2015 at 11:35 AM, fightf...@163.com
wrote:
> Hi,
> How could I save the results of queries run in the Spark SQL CLI and write
> them to a local file?
> Is there any
Thanks, will give it a try. Appreciate your help.
Regards,
Gaurav
On Sep 23, 2014 1:52 PM, Michael Armbrust mich...@databricks.com wrote:
You can't directly query JSON tables from the CLI or JDBC server since
temporary tables only live for the life of the Spark Context. This PR will
eventually (targeted for 1.2) let you do what you want in pure SQL:
https://github.com/apache/spark/pull/2475
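In the meantime, one way to sketch the workaround is to persist the JSON-derived data as a metastore table via a HiveContext, so it outlives the SparkContext and becomes visible to the CLI/JDBC server. The file path and table name here are hypothetical, and this assumes Spark 1.1-era APIs:

```scala
// Sketch, assuming a HiveContext named sqlContext (Spark 1.1-era API).
// jsonFile infers a schema from the JSON file; saveAsTable writes a
// persistent metastore table that the CLI / JDBC server can query later.
val people = sqlContext.jsonFile("people.json")  // hypothetical path
people.saveAsTable("people")                     // hypothetical table name
```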
On Mon, Sep 22, 2014 at 4:52 PM, Yin
A workaround for now would be to save the JSON as Parquet and then create a
metastore Parquet table. Using Parquet will be much faster for repeated
querying. This function might be helpful:
import org.apache.spark.sql.hive.HiveMetastoreTypes
def createParquetTable(name: String, file: String,
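The helper above is cut off in the archive. A hypothetical completion is sketched below; it assumes `HiveMetastoreTypes.toMetastoreType` maps Catalyst types to Hive type strings, a HiveContext named `sqlContext`, and a Hive version with native Parquet table support. The body is a sketch, not the original code:

```scala
import org.apache.spark.sql.hive.HiveMetastoreTypes

// Hypothetical sketch of the truncated helper: registers an external
// metastore table over an existing Parquet file, deriving the Hive column
// types from the file's schema. Assumes a HiveContext named sqlContext.
def createParquetTable(name: String, file: String): Unit = {
  val schema = sqlContext.parquetFile(file).schema
  val columns = schema.fields.map { f =>
    s"`${f.name}` ${HiveMetastoreTypes.toMetastoreType(f.dataType)}"
  }.mkString(", ")
  sqlContext.sql(
    s"CREATE EXTERNAL TABLE $name ($columns) STORED AS PARQUET LOCATION '$file'")
}
```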
Hi Gaurav,
Can you put hive-site.xml in conf/ and try again?
Thanks,
Yin
On Mon, Sep 22, 2014 at 4:02 PM, gtinside gtins...@gmail.com wrote:
Hi,
I have been using the Spark shell to execute all SQLs. I am connecting to
Cassandra, converting the data to JSON, and then running queries on it,