Hi,

Hive 2.3 can work with Spark 2.x. Because the Spark 2.x API is different from the Spark 1.x API, Hive 2.3 cannot run on Spark 1.x releases (including the 1.3.1 you used before).
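
For reference, a minimal sketch of pointing Hive at a Spark 2.x installation, either in hive-site.xml or from the Hive CLI (the spark.home path and master URL below are placeholders, not taken from your setup):

  set hive.execution.engine=spark;
  set spark.master=yarn;
  set spark.home=/opt/spark-2.0.0;    -- must point at a Spark 2.x build

Note that the Hive on Spark documentation recommends a Spark build that does not include the Hive jars; a Spark assembly bundling older Hive classes is another likely cause of the "Failed to create spark client" error.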


------------------
The harder you work, the luckier you get.

------------------ Original ------------------
From: "Mich Talebzadeh" <mich.talebza...@gmail.com>
Sent: Friday, Dec 29, 2017 6:52 AM
To: "user" <user@hive.apache.org>

Subject:  Hive 2.3.2 does not execute on Spark engine anymore





Hi,


My previous Hive 2.0 used to work with Spark 1.3.1 as its execution engine.


I recently upgraded Hive to 2.3.2, and it now fails with Spark as its execution engine, as follows:


Launching Job 1 out of 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Failed to create spark client.
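
The console shows nothing beyond the generic client failure; presumably the underlying stack trace is in the Hive log (the default location below assumes hive.log.dir has not been changed):

  grep -i "spark" /tmp/$USER/hive.log | tail -n 50
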
I have looked around and this seems to be a known issue.


As a matter of interest, has anyone tested whether Hive 2.3.2 runs with any version of the Spark engine?


P.S. I am not interested in Tez or LLAP at this stage.

Thanks

Dr Mich Talebzadeh

LinkedIn: https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw

http://talebzadehmich.wordpress.com




Disclaimer: Use it at your own risk. Any and all responsibility for any loss, 
damage or destruction of data or any other property which may arise from 
relying on this email's technical content is explicitly disclaimed. The author 
will in no case be liable for any monetary damages arising from such loss, 
damage or destruction.
