Github user aarondav commented on a diff in the pull request:

    https://github.com/apache/spark/pull/110#discussion_r10442282
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1204,7 +1204,7 @@ object SparkContext extends Logging {
         master match {
           case "local" =>
             val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
    -        val backend = new LocalBackend(scheduler, 1)
    +        val backend = new LocalBackend(scheduler)
    --- End diff --
    
    I think we could simplify the logic by putting the default behavior here. Something like:
    ```
    // Use all cores available, up to user-specified limit
    val realCores = Runtime.getRuntime.availableProcessors()
    val numCores = math.min(realCores, conf.getInt("spark.cores.max", realCores))
    val backend = new LocalBackend(scheduler, numCores)
    ```
    
    This would allow us to avoid changing much in LocalBackend.
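
    For context, here's a rough sketch of how the whole `case "local"` arm might then read. This assumes the surrounding `scheduler.initialize(backend)` wiring and the returned scheduler stay as in the current code:
    ```
    case "local" =>
      val scheduler = new TaskSchedulerImpl(sc, MAX_LOCAL_TASK_FAILURES, isLocal = true)
      // Default to all cores on the machine, capped by the user-specified limit
      val realCores = Runtime.getRuntime.availableProcessors()
      val numCores = math.min(realCores, conf.getInt("spark.cores.max", realCores))
      val backend = new LocalBackend(scheduler, numCores)
      scheduler.initialize(backend)  // assumed: same wiring as the other match arms
      scheduler
    ```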


