GitHub user ruseel opened a pull request:

    https://github.com/apache/zeppelin/pull/2558

    SparkInterpreter with 

    ### What is this PR for?
    Executing these two paragraphs in the Spark interpreter
    ----
    sc.setLocalProperty("a", "1")
    ----
    sc.getLocalProperty("a")
    ----
    
    unexpectedly evaluates the second one to null. This PR fixes that confusion.
    
    ---
    
    sc.setLocalProperty(...) should behave more deterministically and honor the
    design of SparkContext. SparkContext.setLocalProperty(...) stores its value
    in a ThreadLocal, so SparkContext effectively assumes a single-threaded
    caller.
    
    But before this commit, SparkInterpreter's ExecutorService was created with
    Executor.newSchedulerService(100), so consecutive paragraphs could run on
    different threads and users might perceive sc.setLocalProperty(...) as not
    working.
    
    Fix: use a single thread for the Scheduler.
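    
    The sketch below is a plain-Scala illustration of the ThreadLocal behaviour
    described above (the object and value names are made up for illustration;
    this is neither Spark nor Zeppelin code): a property set from one pool
    thread is invisible from another pool thread, while a single-thread
    executor keeps it visible.
    
    ----
    import java.util.concurrent.Executors
    import scala.concurrent.duration._
    import scala.concurrent.{Await, ExecutionContext, Future}
    
    // "prop" stands in for SparkContext's thread-local local properties.
    object ThreadLocalSketch {
      private val prop = new ThreadLocal[String]
    
      def main(args: Array[String]): Unit = {
        // Many-threaded pool: the "set" and "get" tasks may land on different
        // threads, so the property appears to be lost (null), just like the
        // two Zeppelin paragraphs above.
        val pool = ExecutionContext.fromExecutorService(Executors.newFixedThreadPool(100))
        Await.result(Future { prop.set("1") }(pool), 5.seconds)
        println("pool:   " + Await.result(Future { prop.get }(pool), 5.seconds))  // usually null
    
        // Single-thread executor: both tasks run on the same thread, so the
        // property set earlier is still visible.
        val single = ExecutionContext.fromExecutorService(Executors.newSingleThreadExecutor())
        Await.result(Future { prop.set("1") }(single), 5.seconds)
        println("single: " + Await.result(Future { prop.get }(single), 5.seconds))  // "1"
    
        pool.shutdown()
        single.shutdown()
      }
    }
    ----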
    
    
    
    ### What type of PR is it?
    Bug Fix
    
    ### Todos
    
    ### What is the Jira issue?
    
    ### How should this be tested?
    Create a note with two paragraphs
    
    ----
    sc.setLocalProperty("a", "1")
    ----
    sc.getLocalProperty("a")
    ----
    
    and run the paragraphs separately. The second paragraph should now evaluate
    to "1" instead of null.
    
    ### Screenshots (if appropriate)
    
    ### Questions:


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/ruseel/zeppelin spark-sc-in-one-thread

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/zeppelin/pull/2558.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2558
    
----
commit 1564afd5ff5316f31fe9dda9a788dd5216fc22b6
Author: stephen <step...@vcnc.co.kr>
Date:   2017-09-01T08:42:46Z

    use a self-created thread for the scheduler loop
    
    Zeppelin's Scheduler has a run-forever loop. Before this commit the
    scheduler was running on an executor created from ExecutorFactory.
    
    That introduces a hidden coupling: "ExecutorFactory must create a thread
    pool greater than 1", which seems like a bad practice. (A short sketch
    after this commit log illustrates the coupling.)

commit 980a1754fa5d2632c204686619d6e40874a961dd
Author: stephen <step...@vcnc.co.kr>
Date:   2017-09-01T08:51:05Z

    sc.setLocalProperty(...) should be more deterministic
    
    and honor the design of SparkContext. SparkContext.setLocalProperty(...)
    stores its value in a ThreadLocal, so SparkContext effectively assumes a
    single-threaded caller.
    
    But before this commit, SparkInterpreter's ExecutorService was created
    with Executor.newSchedulerService(100), so users might perceive
    sc.setLocalProperty(...) as not working.
    
    Fix by using a single thread for the Scheduler.

----
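
The first commit above notes that the Scheduler's run-forever loop forces
ExecutorFactory to create a thread pool larger than one. The following is a
rough, illustrative sketch of that coupling in plain Scala (the names are made
up; this is not Zeppelin's actual Scheduler code): a run-forever loop submitted
to a shared pool permanently occupies one of its threads, whereas a dedicated
thread removes the pool-size constraint.

    import java.util.concurrent.Executors

    object SchedulerLoopSketch {
      def main(args: Array[String]): Unit = {
        // Stand-in for a run-forever scheduler loop: it never returns, so it
        // permanently occupies whichever thread it runs on.
        val schedulerLoop = new Runnable {
          def run(): Unit = while (true) Thread.sleep(100)
        }

        // Submitted to a shared pool, the loop ties up one thread for good;
        // with a pool of size 1 the second task below would never run.
        val shared = Executors.newFixedThreadPool(2)
        shared.submit(schedulerLoop)
        shared.submit(new Runnable { def run(): Unit = println("job ran") })
        Thread.sleep(500)
        shared.shutdownNow()   // interrupt the loop so this demo can exit

        // A dedicated (daemon) thread for the loop removes the hidden
        // "pool must be larger than 1" coupling.
        val loopThread = new Thread(schedulerLoop, "scheduler-loop")
        loopThread.setDaemon(true)
        loopThread.start()
      }
    }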

