[jira] [Updated] (HIVE-18655) Apache hive 2.1.1 on Apache Spark 2.0

2018-02-08 Thread AbdulMateen (JIRA)

 [ https://issues.apache.org/jira/browse/HIVE-18655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

AbdulMateen updated HIVE-18655:
---
Description: 
 
Hi,

When connecting to Hive through Beeline, it is not able to create a Spark client.

{{select count(*) from student;}}
{{Query ID = hadoop_20180208184224_f86b5aeb-f27b-4156-bd77-0aab54c0ec67}}
{{Total jobs = 1}}
{{Launching Job 1 out of 1}}
{{In order to change the average load for a reducer (in bytes): set hive.exec.reducers.bytes.per.reducer=<number>}}
{{In order to limit the maximum number of reducers: set hive.exec.reducers.max=<number>}}
{{In order to set a constant number of reducers: set mapreduce.job.reduces=<number>}}

Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(*Failed to create spark client*.)'

{{FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask}}
{{Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask (state=08S01,code=1)}}
  
  
 Installed the prebuilt Spark 2.0 package in standalone cluster mode.
  
 My hive-site.xml is also placed in spark/conf, and the Hive jars were removed from the HDFS path.
  
 {code:xml}
<property>
  <name>spark.master</name>
  <value>yarn</value>
  <description>Spark Master URL</description>
</property>
<property>
  <name>spark.eventLog.enabled</name>
  <value>true</value>
  <description>Spark Event Log</description>
</property>
<property>
  <name>spark.eventLog.dir</name>
  <value>hdfs://xx.xxx.xx.xx:9000/user/spark/eventLogging</value>
  <description>Spark event log folder</description>
</property>
<property>
  <name>spark.executor.memory</name>
  <value>512m</value>
  <description>Spark executor memory</description>
</property>
<property>
  <name>spark.serializer</name>
  <value>org.apache.spark.serializer.KryoSerializer</value>
  <description>Spark serializer</description>
</property>
<property>
  <name>spark.yarn.jars</name>
  <value>hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*</value>
</property>
<property>
  <name>spark.submit.deployMode</name>
  <value>cluster</value>
  <description>Spark Master URL</description>
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>40960</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
{code}
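One detail worth checking in the configuration above: the spark.yarn.jars value contains a stray colon between the port and the path (hdfs://xx.xxx.xx.xx:9000:/user/spark/spark-jars/*), which makes it an invalid URI and could plausibly contribute to the Spark client failing to start. A minimal sketch for sanity-checking such values with Python's standard library (the helper name valid_hdfs_uri and the placeholder host 10.0.0.1 are mine, not from Hive or Spark):

```python
from urllib.parse import urlparse

def valid_hdfs_uri(uri: str) -> bool:
    """Rough sanity check: hdfs scheme plus a parseable host:port."""
    parsed = urlparse(uri)
    if parsed.scheme != "hdfs":
        return False
    try:
        # Accessing .port raises ValueError when the netloc is malformed,
        # e.g. when a stray colon is left after the port number.
        return parsed.port is not None
    except ValueError:
        return False

# The shape reported above, with the extra ':' after the port:
print(valid_hdfs_uri("hdfs://10.0.0.1:9000:/user/spark/spark-jars/*"))  # False
# The same value without the stray colon:
print(valid_hdfs_uri("hdfs://10.0.0.1:9000/user/spark/spark-jars/*"))   # True
```

This only validates the URI shape; it does not confirm that the path exists in HDFS or that the jars are readable by YARN.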

> Apache hive 2.1.1 on Apache Spark 2.0
> -------------------------------------
>
> Key: HIVE-18655
> URL: https://issues.apache.org/jira/browse/HIVE-18655
> Project: Hive
>  Issue Type: Bug
>  Components: Hive, HiveServer2, Spark
>Affects Versions: 2.1.1
> Environment: apache hive - 2.1.1
> apache spark - 2.0 - prebuilt version (removed hive jars)
> apache hadoop - 2.8
>Reporter: AbdulMateen
>Priority: Blocker
