foreachAsync at RemoteHiveSparkClient.java

2014-12-18 Thread yuemeng1
hi, all. I executed a SQL query on Hive on Spark; the command is like: select distinct st.sno,sname from student st join score sc on(st.sno=sc.sno) where sc.cno IN(11,12,13) and st.sage 28; (some days ago this SQL could work), but it gives me some info in the hive shell: Query Hive on Spark job[0] stages: 0
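For readers trying to reproduce this, a minimal sketch of running that query non-interactively is below. The comparison operator on st.sage is cut off in the archived message, so the > is only a guess, and the hive.execution.engine setting is an assumption, not quoted from the thread.

    # Sketch only: the st.sage operator is truncated in the archive; ">" is a guess.
    hive -e "
    set hive.execution.engine=spark;
    select distinct st.sno, sname
    from student st
    join score sc on (st.sno = sc.sno)
    where sc.cno in (11, 12, 13)
      and st.sage > 28;
    "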

can i get access to hive wiki

2014-12-12 Thread yuemeng1
hi, all. I will write a wiki doc about how to install and deploy Hive on Spark, and I hope to get access to the Hive wiki. My Confluence username is yuemeng1.

Re: Job aborted due to stage failure

2014-12-04 Thread yuemeng1
, please forgive the lack of proper documentation. We have a Get Started page that's linked in HIVE-7292. If you can improve the document there, it would be very helpful for other Hive users. Thanks, Xuefu. On Wed, Dec 3, 2014 at 5:42 PM, yuemeng1 yueme...@huawei.com

Re: Job aborted due to stage failure

2014-12-03 Thread yuemeng1
as well as -Pyarn. When you run Hive queries, you may need to run: set spark.home=/path/to/spark/dir; Thanks, Xuefu. On Tue, Dec 2, 2014 at 6:29 PM, yuemeng1 yueme...@huawei.com wrote: hi, XueFu, thanks a lot for your help; now I will provide more detail
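A minimal sketch of the setting Xuefu mentions, run from a shell; /opt/spark is a placeholder path and the engine setting and trivial query are assumptions added for illustration, not quoted from the thread.

    # spark.home must point at the local Spark build; /opt/spark is a placeholder.
    hive -e "
    set spark.home=/opt/spark;
    set hive.execution.engine=spark;
    select 1;
    "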

.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

2014-12-02 Thread yuemeng1
hi, I used git to check out the spark branch of Hive today, and I think it's the latest version. Its pom.xml file includes: <commons-dbcp.version>1.4</commons-dbcp.version> and <derby.version>10.11.1.1</derby.version>. After I built it, there is a derby-10.11.1.1.jar in apache-hive-0.15.0-SNAPSHOT-bin/lib, but when I
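The truncated message appears to be about the metastore client failing to instantiate; one common culprit at the time was a second, older Derby jar elsewhere on the classpath shadowing the 10.11.1.1 one in lib. A quick check, sketched under the assumption that HADOOP_HOME and SPARK_HOME are set:

    # Look for competing Derby jars; an older derby pulled in by Hadoop or Spark
    # can clash with derby-10.11.1.1.jar in Hive's lib directory.
    ls apache-hive-0.15.0-SNAPSHOT-bin/lib/derby-*.jar
    find "$HADOOP_HOME" "$SPARK_HOME" -name 'derby-*.jar' 2>/dev/null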

Re: Job aborted due to stage failure

2014-12-02 Thread yuemeng1
assembly from the spark 1.2 branch. This should give you both a Spark build as well as the spark-assembly jar, which you need to copy to the Hive lib directory. Snapshot is fine, and Spark 1.2 hasn't been released yet. --Xuefu On Mon, Dec 1, 2014 at 7:41 PM, yuemeng1 yueme...@huawei.com
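A hedged sketch of the build-and-copy step described here; only -Pyarn and the 1.2 branch come from the thread, while the hadoop-2.4 profile, the Hadoop version, and the SPARK_SRC/HIVE_HOME variables are placeholders.

    # Build the Spark assembly from a checkout of the 1.2 line (profile and version are guesses).
    cd "$SPARK_SRC"
    mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -DskipTests clean package

    # Copy the resulting assembly jar into Hive's lib directory.
    cp assembly/target/scala-2.10/spark-assembly-*.jar "$HIVE_HOME/lib/"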

Re: Job aborted due to stage failure

2014-12-02 Thread yuemeng1
released yet. --Xuefu On Mon, Dec 1, 2014 at 7:41 PM, yuemeng1 yueme...@huawei.com wrote: hi, XueFu, thanks a lot for your information, but as far as I know, the latest Spark version on GitHub is spark-snapshot-1.3, but there is no spark-1.2, only have
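The confusion here is that master identified itself as a 1.3 snapshot; the 1.2 code lives on a release branch instead. A quick way to see and fetch it, assuming Spark's usual branch-x.y naming:

    # List the release branches on the Spark repo, then clone the 1.2 line directly.
    git ls-remote --heads https://github.com/apache/spark.git 'branch-1.*'
    git clone -b branch-1.2 https://github.com/apache/spark.git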

Re: Job aborted due to stage failure

2014-12-02 Thread yuemeng1
spark branch, the command to build Spark, how you build Hive, and what queries/commands you run. We are running Hive on Spark all the time. Our pre-commit test runs without any issue. Thanks, Xuefu On Tue, Dec 2, 2014 at 4:13 AM, yuemeng1 yueme...@huawei.com wrote

Exception in thread main java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient

2014-12-02 Thread yuemeng1
I checked out the spark branch of Hive today and built it with the command: mvn clean package -DskipTests -Phadoop-2 -Pdist. After that I ran this command in the Hive bin directory: /opt/hispark/apache-hive-0.15.0-SNAPSHOT-bin/bin # ./hive --auxpath /opt/hispark/spark/assembly/target/scala-2.10/spark-assembly-1.2.0-hadoop2.4.0.jar it
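Written out as a small script, under the assumption that the paths are exactly the ones quoted in the message:

    # Build the Hive spark branch, then launch the CLI with the Spark assembly on the aux path.
    cd hive
    mvn clean package -DskipTests -Phadoop-2 -Pdist

    cd /opt/hispark/apache-hive-0.15.0-SNAPSHOT-bin/bin
    ./hive --auxpath /opt/hispark/spark/assembly/target/scala-2.10/spark-assembly-1.2.0-hadoop2.4.0.jar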

Job aborted due to stage failure

2014-12-01 Thread yuemeng1
hi, I built a Hive on Spark package and my Spark assembly jar is spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar. When I run a query in the hive shell, before executing this query I set all the requirements which Hive needs for Spark, and I execute a join query: select distinct st.sno,sname from student st
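The message doesn't list which settings were applied, so the block below is only a sketch of the kind of per-session configuration typically used for Hive on Spark at the time; every value is an example, not the poster's actual configuration.

    # Example-only values; adjust spark.master and memory to the actual cluster.
    hive -e "
    set hive.execution.engine=spark;
    set spark.master=yarn-cluster;
    set spark.executor.memory=512m;
    set spark.serializer=org.apache.spark.serializer.KryoSerializer;
    "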

Re: Job aborted due to stage failure

2014-12-01 Thread yuemeng1
, Dec 1, 2014 at 6:22 PM, yuemeng1 yueme...@huawei.com wrote: hi, I built a Hive on Spark package and my Spark assembly jar is spark-assembly-1.2.0-SNAPSHOT-hadoop2.4.0.jar. When I run a query in the hive shell, before executing this query I set all the require

java.lang.NoClassDefFoundError: org/apache/spark/SparkJobInfo

2014-12-01 Thread yuemeng1
I got a spark-1.1.0-bin-hadoop2.4 from (http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/) and replaced the Spark 1.2.x assembly with http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar, but when I run a query about
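org/apache/spark/SparkJobInfo only appeared in Spark 1.2, which is presumably why the 1.2.0-SNAPSHOT assembly has to replace the one shipped inside the 1.1.0 tarball. A sketch of the swap, assuming the usual binary-tarball layout and a guessed tarball filename; only the assembly URL is quoted verbatim in the message.

    # Replace the assembly inside the prebuilt Spark's lib/ directory.
    # The .tgz filename is a guess; the assembly URL is the one from the message.
    wget http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-1.1.0-bin-hadoop2.4.tgz
    tar xzf spark-1.1.0-bin-hadoop2.4.tgz
    cd spark-1.1.0-bin-hadoop2.4/lib
    rm spark-assembly-*.jar
    wget http://ec2-50-18-79-139.us-west-1.compute.amazonaws.com/data/spark-assembly-1.2.0-SNAPSHOT-hadoop2.3.0-cdh5.1.2.jar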