GitHub user jacek-lewandowski commented on a diff in the pull request:

    https://github.com/apache/spark/pull/3571#discussion_r22926549
  
    --- Diff: core/src/main/scala/org/apache/spark/deploy/worker/DriverRunner.scala ---
    @@ -66,6 +66,22 @@ private[spark] class DriverRunner(
         def sleep(seconds: Int): Unit = (0 until seconds).takeWhile(f => {Thread.sleep(1000); !killed})
       }
     
    +  // Grab all the SSL settings from the worker configuration
    +  private val workerSecurityProps =
    --- End diff --
    
    This made me sad because I realised that I hadn't considered all the cases. 
    
    I think two cases should be supported:
    1) The user provides a configuration that works for all the nodes, that is, for the client node and for the worker nodes - in particular when the client node == a worker node, so the same files, paths, and so on are present everywhere. In that case, the SSL configuration passed in SparkConf can be used to start an executor and then used by that executor. 
    2) The user wants the executors, and the driver when running in cluster mode, to use the server-side SSL configuration. Why? For example, because the key stores are located at different paths on the client machine and the worker machines - suppose you have an AWS cluster and you want to submit jobs from a Windows machine (see the sketch below).
    
    Given that, I'll add one more option to the SSL configuration - `spark.ssl.useNodeLocalConfig` - which, if set, will force the executors and the driver to use the SSL configuration inherited from the worker. 
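    
    To make the intent concrete, here is a minimal sketch of how the flag could be consulted when assembling the driver's SSL properties; only `spark.ssl.useNodeLocalConfig` comes from the proposal above, the helper and its names are hypothetical:
    
    ```scala
    import org.apache.spark.SparkConf
    
    object SslConfigResolution {
      // Pick the SSL properties either from the worker's local configuration
      // (when the proposed flag is set) or from the configuration the driver
      // was submitted with.
      def resolveSslProps(driverConf: SparkConf, workerConf: SparkConf): Map[String, String] = {
        val useNodeLocal = driverConf.getBoolean("spark.ssl.useNodeLocalConfig", false)
        val source = if (useNodeLocal) workerConf else driverConf
        source.getAll.filter { case (k, _) => k.startsWith("spark.ssl.") }.toMap
      }
    }
    ```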


