Hi all,

I'm trying to read from and write to our HDFS cluster using the Spark SQL
HiveContext. My current build of Spark is 1.5.2. The problem is that our
company is running a very old version of HDFS (Hadoop 2.1.0) and Hive
metastore (0.11) from a Hortonworks bundle.
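
For context, I'm going through the standard HiveContext path; here is a
minimal sketch of what I'm doing (the database/table names are placeholders).
As far as I can tell, Spark 1.5's spark.sql.hive.metastore.version setting
only accepts versions 0.12.0 through 1.2.1, which is part of my worry about
our 0.11 metastore:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    // Standard Spark 1.5 setup: a HiveContext on top of a SparkContext
    val sc = new SparkContext(new SparkConf().setAppName("hive-read-write"))
    val hiveContext = new HiveContext(sc)

    // Read a table registered in the Hive metastore, then write the
    // result back. Database/table names below are placeholders.
    val df = hiveContext.sql("SELECT * FROM some_db.some_table")
    df.write.mode("overwrite").saveAsTable("some_db.some_output")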

One possible solution is to stand up a separate cluster running Hadoop 2.6.0
and Hive > 0.12. In that case, would it still be possible to read data from
the old HDFS cluster and write to the new one? Or is the only solution to
upgrade my Hortonworks bundle? Would that impact the current Hive tables?
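
If the separate-cluster route is viable, my assumption is that reads and
writes could cross clusters just by using fully qualified hdfs:// URIs,
something like the sketch below (hostnames, ports, and paths are made up),
though I don't know whether Spark 1.5's Hadoop client can still talk to
Hadoop 2.1 on the wire:

    // Cross-cluster I/O by fully qualifying both namenodes in the paths.
    // Hostnames, ports, and paths are hypothetical; substitute real ones.
    val events = hiveContext.read.parquet("hdfs://old-nn.example.com:8020/data/events")
    events.write.parquet("hdfs://new-nn.example.com:8020/warehouse/events")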

Here are the versions on the HDFS cluster:
Hue 2.2.0:67
HDP 2.0.5
Hadoop 2.1.0
HCatalog 0.11.0
Pig 0.11.2
Hive 0.11.0
Oozie 4.0.0


Thanks a lot,

Jade



