I want to use a Spark cluster through a Scala function, so I can integrate Spark
into my program directly.
For example:
  When I call the count function in my own program, my program will deploy the
function to the cluster, so I can get the result directly:
  import org.apache.spark.SparkContext

  def count(): Long = {
    // Connect to the standalone cluster master and name the application.
    val master = "spark://mache123:7077"
    val appName = "control_test"
    val sc = new SparkContext(master, appName)

    // Load the file from HDFS and count its lines on the cluster.
    val rdd = sc.textFile("hdfs://123d101suse11sp3:9000/netflix/netflix.test")
    val count = rdd.count()
    println("rdd.count = " + count)

    sc.stop()
    count
  }
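
For reference, here is a minimal sketch of how such a function could be driven from a standalone program. The object name, main method, and sbt dependency line are assumptions, not part of the original; you would also need to make your application jar available to the executors (for example via SparkContext's jars argument or by launching with spark-submit):

  // build.sbt (assumed): libraryDependencies += "org.apache.spark" %% "spark-core" % "<your Spark version>"

  object ControlTest {
    def main(args: Array[String]): Unit = {
      // Calling the function connects to the cluster, runs the job, and returns the count.
      val n = count()
      println("lines counted on the cluster: " + n)
    }
  }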
