Are you running from Eclipse?
If so, add the *HADOOP_CONF_DIR* path to the classpath.

Then you can access your HDFS directory as below:

import org.apache.spark.{SparkConf, SparkContext}

object sparkExample {
  def main(args: Array[String]) {
    val logname = "///user/hduser/input/sample.txt"
    val conf = new SparkConf()
      .setAppName("SimpleApp")
      .setMaster("local[2]")
      .set("spark.executor.memory", "1g")
    val sc = new SparkContext(conf)
    val logData = sc.textFile(logname, 2)
    val numAs = logData.filter(line => line.contains("hadoop")).count()
    val numBs = logData.filter(line => line.contains("spark")).count()
    println("Lines with Hadoop: %s, Lines with Spark: %s".format(numAs, numBs))
    sc.stop()
  }
}
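Since the thread is about Hadoop HA: once the Hadoop config directory is on the classpath, HDFS paths should go through the logical nameservice (the dfs.nameservices value from hdfs-site.xml) rather than a single NameNode host:port, so failover stays transparent to Spark. A minimal sketch of building such a path; "mycluster" is a placeholder nameservice name and hdfsPath is a helper invented here for illustration:

```scala
object HaPathExample {
  // Placeholder: substitute your dfs.nameservices value from hdfs-site.xml
  val nameservice = "mycluster"

  // Build a fully qualified HDFS URI against the logical nameservice,
  // so the HDFS client resolves the active NameNode itself.
  def hdfsPath(path: String): String = s"hdfs://$nameservice$path"

  def main(args: Array[String]): Unit = {
    println(hdfsPath("/user/hduser/input/sample.txt"))
  }
}
```

You would then pass the result to sc.textFile instead of a bare path.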




--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/Unable-to-Access-files-in-Hadoop-HA-enabled-from-using-Spark-tp26768p26771.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
