Hi. I am trying to use Apache Spark in a RESTful web service, in which I query data from Hive tables using Apache Spark SQL. This is my Java class:
SparkConf sparkConf = new SparkConf()
    .setAppName("Hive")
    .setMaster("local")
    .setSparkHome("Path");
JavaSparkContext ctx = new JavaSparkContext(sparkConf);
HiveContext sqlContext = new org.apache.spark.sql.hive.HiveContext(ctx.sc());
sqlContext.sql("CREATE TABLE IF NOT EXISTS page (VisitedDate STRING, Platform STRING, VisitCount INT) "
    + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n'");
sqlContext.sql("LOAD DATA INPATH '/page' INTO TABLE page");
Row[] result = sqlContext.sql("SELECT * FROM page WHERE Platform = 'Apache'").collect();
ctx.close();

When I deploy this service, it throws an invalid path exception because it looks for the file in the local file system. But when I ran the same code in the Scala shell, it worked fine. On analysis, I found that it is not picking up the Hive configuration file from the Spark home, even though I set the Spark home in my code. I also found that a metastore_db folder is created inside the GlassFish server that hosts this service, which suggests it is falling back to a local embedded Derby metastore instead of my configured Hive metastore. Can anyone tell me how to solve this issue?

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/Invalid-HDFS-path-exception-tp23875.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
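For context: HiveContext does not read hive-site.xml from SPARK_HOME on the driver's behalf in a deployed web app; it looks for it on the application's classpath. If none is found, it creates a local Derby metastore (the metastore_db folder you see) and resolves unqualified paths like '/page' against the local file system. A minimal hive-site.xml to place on the web application's classpath (e.g. WEB-INF/classes) might look like the sketch below; the hostnames metastore-host and the port are placeholders you would replace with your cluster's actual values:

```xml
<configuration>
  <!-- Point Spark at the real Hive metastore service instead of a
       local embedded Derby metastore (hostname/port are placeholders) -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://metastore-host:9083</value>
  </property>
  <!-- Warehouse directory for managed tables, resolved in HDFS -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
  </property>
</configuration>
```

Alternatively, a fully qualified URI in the LOAD statement (e.g. LOAD DATA INPATH 'hdfs://namenode:8020/page' INTO TABLE page, with namenode:8020 standing in for your NameNode address) forces the path to resolve against HDFS regardless of the default file system.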