Running Hive Beeline .hql file in Spark
Hi,

Currently we are running Hive queries through Beeline as below.

Beeline:

  beeline -u "jdbc:hive2://localhost:1/default;principal=hive/_HOST@nsroot.net" --showHeader=false --silent=true --outputformat=dsv --verbose=false -f /home/sample.hql > output_partition.txt

Note: we run the Hive queries in sample.hql and redirect the output to the file output_partition.txt.

Spark: Can anyone tell us how to implement this in Spark SQL, i.e. executing the sample.hql file and redirecting the output to one file?

Regards,
Prasad
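A minimal sketch of one way to do this from spark-shell, assuming a Hive-enabled SparkSession named `spark` (available in Spark 2.x) and an .hql script of semicolon-separated statements; the file path and output directory below are illustrative, and the splitting helper is my own, not a Spark API:

```scala
import scala.io.Source

// Split an HQL script into individual statements on ';',
// trimming whitespace and dropping blank entries.
def splitStatements(script: String): Seq[String] =
  script.split(";").map(_.trim).filter(_.nonEmpty).toSeq

// In spark-shell (Spark 2.x with Hive support), roughly:
//   val script = Source.fromFile("/home/sample.hql").mkString
//   splitStatements(script).foreach { stmt =>
//     spark.sql(stmt).write.option("sep", "|").csv("/home/output_partition")
//   }

// Quick check of the splitting helper:
println(splitStatements("use default;\nselect empno, name from emp;"))
```

Alternatively, Spark ships a `spark-sql` command-line tool whose `-f` flag mirrors Beeline's, so `spark-sql -f /home/sample.hql > output_partition.txt` may be the closest drop-in replacement for the Beeline invocation above.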
Re: Writing Spark SQL output in Local and HDFS path
Hi,

I tried the code below:

  result.write.csv("home/Prasad/")

It is not working; it says:

  Error: value csv is not a member of org.apache.spark.sql.DataFrameWriter

Regards,
Prasad

On Thu, Jan 19, 2017 at 4:35 PM, smartzjp <zjp_j...@163.com> wrote:
> Because the number of reducers will not be one, the output will be a
> folder on HDFS. You can use "result.write.csv(foldPath)".
>
> --
>
> Hi,
> Can anyone please let us know how to write the output of Spark SQL to a
> local and an HDFS path using Scala code.
>
> Code:
>
>   scala> val result = sqlContext.sql("select empno, name from emp")
>   scala> result.show()
>
> result.show() prints the output to the console. I need to redirect the
> output to a local file as well as an HDFS file, with "|" as the delimiter.
>
> We tried the code below:
>
>   result.saveAsTextFile("home/Prasad/result.txt")
>
> It is not working as expected.
>
> --
> Prasad. T

--
Regards,
RAVI PRASAD. T
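For what it's worth, `DataFrameWriter.csv` only appeared in Spark 2.0, which would explain the "not a member" error on an older build. On a 1.x `sqlContext`, one workaround is to format each row as a delimited string yourself and save the resulting RDD as text. A sketch under that assumption; the formatting helper below is illustrative, not part of the Spark API:

```scala
// Join a row's column values with '|'; nulls become empty strings.
// Illustrative helper, not a Spark API.
def toDelimited(values: Seq[Any], sep: String = "|"): String =
  values.map(v => if (v == null) "" else v.toString).mkString(sep)

// On Spark 1.x, roughly:
//   result.rdd
//     .map(row => toDelimited(row.toSeq))
//     .saveAsTextFile("hdfs:///user/prasad/result")

println(toDelimited(Seq(7839, "KING")))
```

As the reply above notes, `saveAsTextFile` also produces a directory of part files rather than a single file, so "result.txt" would be a folder.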
Writing Spark SQL output in Local and HDFS path
Hi,

Can anyone please let us know how to write the output of Spark SQL to a local and an HDFS path using Scala code.

Code:

  scala> val result = sqlContext.sql("select empno, name from emp")
  scala> result.show()

result.show() prints the output to the console. I need to redirect the output to a local file as well as an HDFS file, with "|" as the delimiter.

We tried the code below:

  result.saveAsTextFile("home/Prasad/result.txt")

It is not working as expected.

--
Prasad. T
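One way to target both destinations is to write the same DataFrame twice with scheme-qualified URIs, once `file://` and once `hdfs://`, so Spark does not fall back to its default filesystem. A minimal sketch; the paths and namenode address are illustrative, the helpers are my own, and the commented write calls assume Spark 2.x:

```scala
// Build explicit scheme-qualified URIs. Paths here are illustrative.
def localUri(path: String): String = s"file://$path"
def hdfsUri(namenode: String, path: String): String = s"hdfs://$namenode$path"

// On Spark 2.x, roughly:
//   result.write.option("sep", "|").csv(localUri("/home/Prasad/result"))
//   result.write.option("sep", "|").csv(hdfsUri("namenode:8020", "/user/prasad/result"))

println(localUri("/home/Prasad/result"))
```

Note that both calls produce a directory of part files, not a single file named result.txt; concatenating the parts afterwards (e.g. with `hdfs dfs -getmerge`) is a common follow-up step.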
Which version of Hive support Spark Shark
Hi,

Can anyone please help me understand which versions of Hive are supported by Spark and Shark?

--
Regards,
RAVI PRASAD. T