tysonnorris commented on a change in pull request #2531: Share bookkeeping data across controllers
URL: https://github.com/apache/incubator-openwhisk/pull/2531#discussion_r139473470
 
 

 ##########
 File path: core/controller/src/main/scala/whisk/core/loadBalancer/SharedDataService.scala
 ##########
 @@ -33,35 +32,14 @@ case class RemoveCounter(key: String)
 case object GetMap
 
 /**
 - * Companion object to specify actor properties from the outside, e.g. name of the shared map
 + * Companion object to specify actor properties from the outside, e.g. name of the shared map and cluster seed nodes
  */
 object SharedDataService {
-  val requiredProperties = Map(WhiskConfig.controllerSeedNodes -> null)
-
 -  def props(storageName: String): Props = Props(new SharedDataService(storageName))
-
-  /**
 -   * Add seed nodes if cluster provider is specified, otherwhise return the existing config.
-   * Parse akka seed nodes this way until either of these 2 issues is resolved:
-   * https://github.com/akka/akka/issues/23600
-   * https://github.com/typesafehub/config/issues/69
-   * @return Updated Config
-   */
-  def addAkkaSeedNodesToConf(whiskConf: WhiskConfig): Config = {
-    val conf = ConfigFactory.load()
-
-    val cluster = conf.getString("akka.actor.provider")
-
-    if (cluster == "cluster") {
-      val seedNodes = whiskConf.controllerSeedNodes
 -      val nodes = seedNodes.split(' ').map(x => s""""akka.tcp://controller-actor-system@$x"""")
 -      val configWithSeedNodes = ConfigFactory.parseString(s"akka.cluster.seed-nodes=[${nodes.mkString(",")}]")
-      configWithSeedNodes.withFallback(conf)
-    } else conf
-  }
+  def props(storageName: String, seedNodes: Seq[Address]): Props =
+    Props(new SharedDataService(storageName, seedNodes))
 }
 
 -class SharedDataService(storageName: String) extends Actor with ActorLogging {
 +class SharedDataService(storageName: String, seedNodes: Seq[Address]) extends Actor with ActorLogging {
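As a side note on the new `props(storageName, seedNodes)` signature: the space-separated `controllerSeedNodes` string still has to be turned into addresses somewhere by the caller. A minimal sketch of that parsing, using a hypothetical `NodeAddress` case class as a self-contained stand-in for `akka.actor.Address` (the helper name and the protocol/system defaults are assumptions, not part of this PR):

```scala
// Hypothetical stand-in for akka.actor.Address, so the sketch is self-contained.
case class NodeAddress(protocol: String, system: String, host: String, port: Int)

// Parse a space-separated "host:port host:port" string, as used by
// whisk's controllerSeedNodes setting, into one address per node.
def parseSeedNodes(raw: String): Seq[NodeAddress] =
  raw.split(' ').toSeq.filter(_.nonEmpty).map { entry =>
    val Array(host, port) = entry.split(':')
    NodeAddress("akka.tcp", "controller-actor-system", host, port.toInt)
  }
```

With `akka.actor.Address` in place of the stand-in, the resulting `Seq` could be passed directly to `SharedDataService.props`.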
 
 Review comment:
   @vvraskin @markusthoemmes One example is downing nodes: since clustering is enabled inside SharedDataService, will it also handle downing cluster nodes? Are the semantics for that explicitly available, or are you assuming that auto-downing is enabled, or that the default behavior must be used in all cases? For example, just as seed nodes need to be specified in different ways to support different deployments, downing behavior should also be configurable, and it's not clear how that will happen, except via `Cluster(system).down()` at some unrelated location; while that is technically possible, it seems leaky to spread this around. WDYT?
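On the downing point specifically, Akka allows downing behavior to be expressed in configuration next to the seed nodes, rather than via scattered `Cluster(system).down()` calls. A hedged sketch of what that could look like (the concrete addresses and timeout are illustrative, not something this PR configures):

```hocon
akka {
  actor.provider = "cluster"
  cluster {
    seed-nodes = [
      "akka.tcp://controller-actor-system@10.0.0.1:2551",
      "akka.tcp://controller-actor-system@10.0.0.2:2551"
    ]
    # Illustrative only: automatically down unreachable members after a delay.
    # Akka documents auto-downing as unsafe for production; a real deployment
    # would plug in an explicit downing strategy instead.
    auto-down-unreachable-after = 10s
  }
}
```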
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
