Here is my code. It works locally with setMaster("local[*]"), but it does not
work against my remote Spark cluster. I checked all the logs and found no
errors, only the warning below. Could you please help? Thank you very much!

14:45:47.956 [Timer-0] WARN org.apache.spark.scheduler.TaskSchedulerImpl -
Initial job has not accepted any resources; check your cluster UI to ensure
that workers are registered and have sufficient resources.




import org.apache.spark.{SparkConf, SparkContext}
import scala.math.random

object SparkPi {

  def main(args: Array[String]): Unit = {
    // Build the context inside main so it is created when the job runs,
    // not during object initialization.
    val sparkConf = new SparkConf()
      .setAppName("Spark Pi")
      .setMaster("spark://10.100.103.25:7077")
      //.setMaster("local[*]")

    val sc = new SparkContext(sparkConf)

    val slices = 2
    val n = math.min(10000L * slices, Int.MaxValue).toInt // avoid overflow
    val count = sc.parallelize(1 until n, slices).map { i =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)
    val pi = 4.0 * count / (n - 1)
    println(s"Pi is roughly $pi")
    sc.stop()
  }
}
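
In case it helps narrow things down: that warning usually means the master
accepted the application but no registered worker could satisfy the requested
executor resources, or the workers cannot reach back to the driver. Below is a
minimal config-only sketch of pinning those settings explicitly in SparkConf.
The memory/core values and the driver-host placeholder are illustrative
assumptions, not known-good settings for this cluster:

```scala
import org.apache.spark.SparkConf

// Config sketch: cap what each executor asks for so it fits within what
// a registered worker actually advertises in the cluster UI.
// All values below are placeholders to adjust for the real cluster.
val conf = new SparkConf()
  .setAppName("Spark Pi")
  .setMaster("spark://10.100.103.25:7077")
  .set("spark.executor.memory", "512m")    // must be <= a worker's free memory
  .set("spark.cores.max", "2")             // must be <= total free cores
  .set("spark.driver.host", "<driver-ip>") // workers must be able to reach this address
```

Comparing these requests against the worker resources shown in the master's
web UI (port 8080 by default) is usually the quickest way to spot the mismatch.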

