[ https://issues.apache.org/jira/browse/TOREE-425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16090867#comment-16090867 ]
ASF GitHub Bot commented on TOREE-425:
--------------------------------------

Github user rdblue commented on a diff in the pull request:

    https://github.com/apache/incubator-toree/pull/128#discussion_r127858615

    --- Diff: kernel/src/main/scala/org/apache/toree/kernel/api/Kernel.scala ---
    @@ -414,13 +417,15 @@ class Kernel (
             Await.result(sessionFuture, Duration(100, TimeUnit.MILLISECONDS))
           } catch {
             case timeout: TimeoutException =>
    -          // getting the session is taking a long time, so assume that Spark
    -          // is starting and print a message
    -          display.content(
    -            MIMEType.PlainText, "Waiting for a Spark session to start...")
    +          // in cluster mode, the sparkContext is forced to initialize
    +          if (SparkUtils.isSparkClusterMode(defaultSparkConf) == false) {
    --- End diff --

    Instead of preventing the message from being sent, I think this should update the logic so that it uses the second case. That case just creates a new context without the futures or Await calls.

> sparkContext lazy initiation causes some issues when Toree is running on Yarn Cluster mode
> ------------------------------------------------------------------------------------------
>
>                 Key: TOREE-425
>                 URL: https://issues.apache.org/jira/browse/TOREE-425
>             Project: TOREE
>          Issue Type: Bug
>          Components: Kernel
>    Affects Versions: 0.2.0
>            Reporter: Luciano Resende
>            Assignee: Luciano Resende
>            Priority: Critical
>             Fix For: 0.2.0
>
>
> Kernels running in yarn-cluster mode (when launched via spark-submit) must
> initialize a SparkContext in order for the Spark Yarn code to register the
> application as RUNNING:
> https://github.com/apache/spark/blob/3d4d11a80fe8953d48d8bfac2ce112e37d38dc90/resource-managers/yarn/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala#L405

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
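For illustration, the reviewer's suggestion can be sketched as follows: rather than suppressing the "Waiting for a Spark session..." message when `isSparkClusterMode` is true, branch before the `Await` so that cluster mode takes the synchronous path that creates the context directly, with no futures or timeouts involved. This is a minimal, hypothetical sketch, not the actual Toree code; the `createDirectly` parameter, the `String` stand-in for a Spark session, and the boolean flag are assumptions standing in for `SparkUtils.isSparkClusterMode(defaultSparkConf)` and the kernel's real session-creation path.

```scala
import java.util.concurrent.{TimeUnit, TimeoutException}
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

// Hypothetical sketch of the suggested control flow. In yarn-cluster mode the
// context must be initialized eagerly (so Yarn can mark the app RUNNING), so
// we skip the future/Await machinery entirely and call the direct path.
def sparkSession(sessionFuture: Future[String],
                 isClusterMode: Boolean,
                 createDirectly: () => String): String = {
  if (isClusterMode) {
    // cluster mode: create the context synchronously, no futures or Await
    createDirectly()
  } else {
    try {
      // client mode: wait briefly for the lazily started session
      Await.result(sessionFuture, Duration(100, TimeUnit.MILLISECONDS))
    } catch {
      case _: TimeoutException =>
        // Spark is probably still starting; tell the user, then keep waiting
        println("Waiting for a Spark session to start...")
        Await.result(sessionFuture, Duration.Inf)
    }
  }
}
```

The point of the design is that cluster mode never races a timeout against session startup: the message (and the future) only ever exist on the client-mode path.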