@Crystal
You can run Spark on YARN instead. YARN has a fair scheduler; enable it by modifying yarn-site.xml.
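A minimal sketch of the yarn-site.xml change being suggested here (this is the standard Hadoop property for switching the ResourceManager to the FairScheduler; whether it suits Crystal's cluster is an assumption):

```xml
<!-- yarn-site.xml: replace the default scheduler with the FairScheduler -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```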
Sent from my iPad
> On Aug 11, 2014, at 6:49, Matei Zaharia wrote:
>
> Hi Crystal,
>
> The fair scheduler is only for jobs running concurrently within the same
> SparkContext (i.e. within an application), not for separate applications on
> the standalone cluster manager. It has no effect there. To run more of those
> concurrently, you need to set a cap on how many cores they each grab with
> spark.cores.max.
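To illustrate Matei's suggestion, a sketch of capping each application's core usage so that several can run concurrently on the standalone cluster (the value 4 is a placeholder assumption; in the Spark 0.8-era setup shown later in this thread, spark.cores.max can be passed through SPARK_JAVA_OPTS):

```shell
# spark-env.sh: limit each application to 4 cores so others can run alongside it
export SPARK_JAVA_OPTS="-Dspark.cores.max=4"
```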
>
> Matei
>
> On August 10, 2014 at 12:13:08 PM, 李宜芳 (xuite...@gmail.com) wrote:
>
> Hi
>
> I am trying to switch from FIFO to FAIR with standalone mode.
>
> my environment:
> hadoop 1.2.1
> spark 0.8.0 using standalone mode
>
> and I modified the code:
>
> ClusterScheduler.scala -> System.getProperty("spark.scheduler.mode",
> "FAIR"))
> SchedulerBuilder.scala ->
> val DEFAULT_SCHEDULING_MODE = SchedulingMode.FAIR
>
> LocalScheduler.scala ->
> System.getProperty("spark.scheduler.mode", "FAIR")
>
> spark-env.sh ->
> export SPARK_JAVA_OPTS="-Dspark.scheduler.mode=FAIR"
> export SPARK_JAVA_OPTS=" -Dspark.scheduler.mode=FAIR" ./run-example
> org.apache.spark.examples.SparkPi spark://streaming1:7077
>
>
> but it does not work.
> I want to switch from FIFO to FAIR.
> How can I do this?
>
> Regards
> Crystal Lee
>
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org