Hi everyone. I have been using Apache Spark for two weeks, and so far I am querying Hive tables through the Spark Java API. It works fine on a single-node Hadoop setup, but when I run the same code on a multi-node Hadoop cluster it throws:

"org.apache.spark.SparkException: Detected yarn-cluster mode, but isn't running on a cluster. Deployment to YARN is not supported directly by SparkContext. Please use spark-submit."

This is the Java code I ran on the single-node cluster:
```java
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.hive.HiveContext;

SparkConf sparkConf = new SparkConf()
        .setAppName("Hive")
        .setMaster("local")
        .setSparkHome("path");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
HiveContext sqlContext = new HiveContext(ctx.sc());
Row[] result = sqlContext.sql("SELECT * FROM tablename").collect();
```

On the multi-node cluster I only changed "local" to "yarn-cluster". Can anyone help me with this?
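For context, the exception itself says that yarn-cluster mode cannot be launched from a plain SparkContext and must go through spark-submit. A minimal sketch of what that invocation might look like is below; the class name com.example.HiveQuery and the jar path myapp.jar are placeholders for my own, and I would remove the setMaster("local") call from the code since spark-submit's --master flag sets it instead:

```shell
# Hedged sketch: package the application into a jar, then launch it on YARN
# via spark-submit instead of constructing a yarn-cluster SparkContext in code.
# --class and the jar path are placeholders for the real application.
spark-submit \
  --class com.example.HiveQuery \
  --master yarn-cluster \
  myapp.jar
```

With this approach the master is supplied on the command line, so the SparkConf in the code no longer needs (and should not have) a hard-coded setMaster value.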