Github user zjffdu commented on the issue:

    https://github.com/apache/zeppelin/pull/2329
  
    @jongyoul I could not run your PR; it seems to work only in your environment. There are several issues, one of which is:
    * HADOOP_CONF_DIR is not on the Zeppelin server classpath, so the interpreter could not connect to my YARN cluster. I believe it works in your environment because your cluster uses the default YARN configuration (see the sketch below).
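
    For reference, a minimal sketch (not part of this PR) of why the default configuration gets picked up; it assumes only the standard Hadoop client API, and the class name is made up:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;

// Hypothetical check: YarnConfiguration loads yarn-site.xml as a classpath
// resource. If HADOOP_CONF_DIR is not on the Zeppelin server classpath, the
// file is never found and Hadoop falls back to its compiled-in defaults.
public class YarnConfCheck {
  public static void main(String[] args) {
    YarnConfiguration conf = new YarnConfiguration();
    // Without a yarn-site.xml on the classpath this prints the default
    // 0.0.0.0:8032, i.e. a ResourceManager on the local host -- which only
    // happens to work on a cluster that really runs with default settings.
    System.out.println(conf.get(YarnConfiguration.RM_ADDRESS));
  }
}
```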
     
    Besides that, I still have concerns about this approach:
    1. You launch the remote interpreter process as a YARN AM and then create the Spark interpreter over Thrift in yarn-cluster mode. It seems you don't use spark-submit, which means SparkSubmit.scala is never invoked, and that could cause problems (see the first sketch after this list).
    2. The other thing I worry about is that the Spark app is now launched on a remote machine (the YARN AM), but that machine may not have the Spark configuration (e.g. spark-defaults.conf, hive-site.xml); see the second sketch after this list. It also seems impossible to run multiple Spark versions with this approach.
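
    To make concern 1 concrete, here is a rough sketch of the conventional launch path that the PR appears to skip; all paths are placeholders. spark-submit runs SparkSubmit.scala, which reads spark-defaults.conf, assembles the driver classpath, and handles yarn-cluster deployment before any interpreter code runs:

```java
import java.io.IOException;
import java.util.Arrays;

// Illustrative only: launching the remote interpreter via spark-submit,
// roughly what Zeppelin's stock interpreter.sh does today. The paths and
// jar location are placeholders.
public class SparkSubmitLaunch {
  public static void main(String[] args) throws IOException, InterruptedException {
    ProcessBuilder pb = new ProcessBuilder(Arrays.asList(
        "/opt/spark/bin/spark-submit",   // placeholder SPARK_HOME/bin
        "--master", "yarn",
        "--deploy-mode", "cluster",
        "--class", "org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer",
        "/opt/zeppelin/interpreter/spark/zeppelin-spark.jar"));  // placeholder
    pb.inheritIO();                       // stream the child's output here
    System.exit(pb.start().waitFor());    // propagate spark-submit's exit code
  }
}
```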
    
     
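    And for concern 2, a sketch of why a bare YARN AM would not see the Spark configuration; it assumes only spark-core on the classpath, and the class name is made up:

```java
import org.apache.spark.SparkConf;

// Hypothetical check: SparkConf reads only spark.* JVM system properties and
// never opens spark-defaults.conf itself -- SparkSubmit normally loads that
// file and forwards the values into the driver JVM.
public class ConfVisibility {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf();
    // In a process started directly as a YARN AM nothing has forwarded the
    // defaults, so this prints "false" even when spark-defaults.conf sets a
    // master on the machine running the Zeppelin server.
    System.out.println(conf.contains("spark.master"));
  }
}
```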

