I have a simple HQL query (below). In Hive it takes maybe 10 minutes to complete.
When I run the same thing through Spark it seems to take forever. The table is
partitioned by "datestamp". I am using Spark 1.3.1.
How can I tune/optimize this?

Here is the query:

    tumblruser = hiveCtx.sql("""
        select s_mobile_id, receive_time
        from mx3.post_tp_annotated_mb_impr
        where ad_id = 30590918987
          and datestamp >= '201506230000'
    """)


Thanks
Ayman




