[ https://issues.apache.org/jira/browse/HIVE-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14364950#comment-14364950 ]

Amithsha commented on HIVE-9970:
--------------------------------

Hive log (hive.log):

2015-03-17 16:36:52,654 INFO  [main]: ql.Driver 
(SessionState.java:printInfo(852)) - Launching Job 1 out of 1
2015-03-17 16:36:52,656 INFO  [main]: ql.Driver (Driver.java:launchTask(1630)) 
- Starting task [Stage-1:MAPRED] in parallel
2015-03-17 16:36:52,660 INFO  [Thread-68]: hive.metastore 
(HiveMetaStoreClient.java:open(365)) - Trying to connect to metastore with URI 
thrift://nn01:7099
2015-03-17 16:36:52,665 INFO  [Thread-68]: hive.metastore 
(HiveMetaStoreClient.java:open(461)) - Connected to metastore.
2015-03-17 16:36:52,688 INFO  [Thread-68]: session.SessionState 
(SessionState.java:start(488)) - No Tez session required at this point. 
hive.execution.engine=mr.
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) - In order to change the average load for a 
reducer (in bytes):
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) -   set 
hive.exec.reducers.bytes.per.reducer=<number>
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) - In order to limit the maximum number of 
reducers:
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) -   set hive.exec.reducers.max=<number>
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) - In order to set a constant number of 
reducers:
2015-03-17 16:36:52,689 INFO  [Thread-68]: exec.Task 
(SessionState.java:printInfo(852)) -   set mapreduce.job.reduces=<number>
2015-03-17 16:36:52,696 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(130)) - load RPC property from 
hive configuration (hive.spark.client.connect.timeout -> 1000).
2015-03-17 16:36:52,697 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(113)) - load spark property from 
hive configuration (spark.eventLog.enabled -> false).
2015-03-17 16:36:52,697 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(130)) - load RPC property from 
hive configuration (hive.spark.client.rpc.threads -> 8).
2015-03-17 16:36:52,698 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(130)) - load RPC property from 
hive configuration (hive.spark.client.secret.bits -> 256).
2015-03-17 16:36:52,699 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(130)) - load RPC property from 
hive configuration (hive.spark.client.rpc.max.size -> 52428800).
2015-03-17 16:36:52,699 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(113)) - load spark property from 
hive configuration (spark.master -> spark://10.10.10.25:7077).
2015-03-17 16:36:52,702 INFO  [Thread-68]: spark.HiveSparkClientFactory 
(HiveSparkClientFactory.java:initiateSparkConf(130)) - load RPC property from 
hive configuration (hive.spark.client.server.connect.timeout -> 90000).
2015-03-17 16:36:54,480 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Spark assembly has been built with Hive, 
including Datanucleus jars on classpath
2015-03-17 16:36:57,761 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Warning: Ignoring non-spark config property: 
hive.spark.client.connect.timeout=1000
2015-03-17 16:36:57,761 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Warning: Ignoring non-spark config property: 
hive.spark.client.rpc.threads=8
2015-03-17 16:36:57,762 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Warning: Ignoring non-spark config property: 
hive.spark.client.rpc.max.size=52428800
2015-03-17 16:36:57,763 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Warning: Ignoring non-spark config property: 
hive.spark.client.secret.bits=256
2015-03-17 16:36:57,763 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Warning: Ignoring non-spark config property: 
hive.spark.client.server.connect.timeout=90000
2015-03-17 16:36:58,224 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - 15/03/17 16:36:58 INFO client.RemoteDriver: 
Connecting to: nn01:50661
2015-03-17 16:36:58,240 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) - Exception in thread "main" 
java.lang.NoSuchFieldError: SPARK_RPC_CLIENT_CONNECT_TIMEOUT
2015-03-17 16:36:58,241 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.hive.spark.client.rpc.RpcConfiguration.<clinit>(RpcConfiguration.java:46)
2015-03-17 16:36:58,241 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:137)
2015-03-17 16:36:58,241 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:528)
2015-03-17 16:36:58,242 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2015-03-17 16:36:58,242 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
2015-03-17 16:36:58,242 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2015-03-17 16:36:58,243 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
java.lang.reflect.Method.invoke(Method.java:606)
2015-03-17 16:36:58,243 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
2015-03-17 16:36:58,243 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
2015-03-17 16:36:58,243 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
2015-03-17 16:36:58,244 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
2015-03-17 16:36:58,244 INFO  [stderr-redir-1]: client.SparkClientImpl 
(SparkClientImpl.java:run(530)) -        at 
org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2015-03-17 16:36:58,858 WARN  [Driver]: client.SparkClientImpl 
(SparkClientImpl.java:run(388)) - Child process exited with code 1.
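
The failure is the NoSuchFieldError: SPARK_RPC_CLIENT_CONNECT_TIMEOUT thrown from
RpcConfiguration.<clinit> in the remote driver. Together with the "Spark assembly
has been built with Hive" line above, this usually suggests that an older copy of
the Hive classes bundled inside the Spark assembly is being loaded ahead of the
Hive 1.1.0 jars, and that older HiveConf does not define the hive.spark.client.*
fields. A rough way to check (only a sketch; the jar path is the one from the
report below, so adjust it to your installation):

# Look for a bundled (older) HiveConf inside the spark-assembly jar;
# the path is taken from the quoted report below.
jar tf /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar \
  | grep org/apache/hadoop/hive/conf/HiveConf.class

If HiveConf.class shows up inside the assembly, the RemoteDriver can resolve that
older class instead of the one shipped with Hive 1.1.0, which would produce
exactly this error.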


> Hive on spark
> -------------
>
>                 Key: HIVE-9970
>                 URL: https://issues.apache.org/jira/browse/HIVE-9970
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Amithsha
>
> Hi all,
> Recently I configured Spark 1.2.0; my environment is Hadoop 2.6.0 and
> Hive 1.1.0. I tried Hive on Spark, and while executing an INSERT I got
> the following error.
> Query ID = hadoop2_20150313162828_8764adad-a8e4-49da-9ef5-35e4ebd6bc63
> Total jobs = 1
> Launching Job 1 out of 1
> In order to change the average load for a reducer (in bytes):
> set hive.exec.reducers.bytes.per.reducer=<number>
> In order to limit the maximum number of reducers:
> set hive.exec.reducers.max=<number>
> In order to set a constant number of reducers:
> set mapreduce.job.reduces=<number>
> Failed to execute spark task, with exception
> 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create
> spark client.)'
> FAILED: Execution Error, return code 1 from
> org.apache.hadoop.hive.ql.exec.spark.SparkTask
> I have added the spark-assembly jar to Hive's lib directory, and also in the
> Hive console using the add jar command, followed by these settings (an
> equivalent launch-time invocation is sketched after this quoted report):
> set spark.home=/opt/spark-1.2.1/;
> add jar 
> /opt/spark-1.2.1/assembly/target/scala-2.10/spark-assembly-1.2.1-hadoop2.4.0.jar;
> set hive.execution.engine=spark;
> set spark.master=spark://xxxxxxx:7077;
> set spark.eventLog.enabled=true;
> set spark.executor.memory=512m;
> set spark.serializer=org.apache.spark.serializer.KryoSerializer;
> Can anyone suggest a fix?
> Thanks & Regards
> Amithsha
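
For completeness, the session settings quoted above can also be supplied when
launching the Hive CLI instead of being set interactively. This is only a sketch
reusing the reporter's own values (spark://xxxxxxx:7077 is the placeholder from
the report, and the add jar step is omitted here):

# Same settings as the quoted 'set ...' steps, passed at launch time.
hive --hiveconf spark.home=/opt/spark-1.2.1/ \
     --hiveconf hive.execution.engine=spark \
     --hiveconf spark.master=spark://xxxxxxx:7077 \
     --hiveconf spark.eventLog.enabled=true \
     --hiveconf spark.executor.memory=512m \
     --hiveconf spark.serializer=org.apache.spark.serializer.KryoSerializer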


