GitHub user olegz opened a pull request:

    https://github.com/apache/spark/pull/2849

    Initial commit to provide pluggable strategy to facilitate access to native Hadoop resources

    Initial commit to provide pluggable strategy to facilitate access to native Hadoop resources

    Added HadoopExecutionContext trait and its default implementation DefaultHadoopExecutionContext
    Modified SparkContext to instantiate and delegate to the instance of HadoopExecutionContext where appropriate
    
    Changed HadoopExecutionContext to JobExecutionContext
    Changed DefaultHadoopExecutionContext to DefaultExecutionContext
    The names were changed because having "Hadoop" in them would be confusing when Spark executes outside of Hadoop
    Added initial documentation and tests
    
    polished scaladoc
    
    annotated JobExecutionContext with @DeveloperApi
    
    eliminated TaskScheduler null checks in favor of a NoOpTaskScheduler,
    to be used in cases where execution of the Spark DAG is delegated to an external execution environment
    
    added execution-context check to SparkSubmit
    
    Added recognition of execution-context to SparkContext
    updated spark-class script to recognize when 'execution-context:' is used
    
    polished merge
    
    changed annotations from @DeveloperApi to @Experimental per a suggestion on the PR
    
    externalized persist and unpersist operations
    
    added classpath hooks to spark-class
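
Since the patch itself is not inlined in this message, here is a minimal sketch of the pluggable contract the commits describe. It is an illustration only: apart from the names JobExecutionContext and DefaultExecutionContext, the method set below is an assumption and may not match the actual patch.

    import scala.reflect.ClassTag
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    // Hypothetical shape of the strategy trait; the method set is
    // illustrative, not taken from the patch.
    trait JobExecutionContext {
      // Run a job over the given RDD; an implementation may hand the DAG to
      // an external execution environment instead of Spark's own scheduler.
      def runJob[T, U: ClassTag](sc: SparkContext, rdd: RDD[T],
                                 func: Iterator[T] => U): Array[U]
      // persist/unpersist are externalized so an alternative runtime can
      // decide how (or whether) to cache.
      def persist[T](rdd: RDD[T]): RDD[T]
      def unpersist[T](rdd: RDD[T], blocking: Boolean): RDD[T]
    }

    // Default implementation preserving Spark's standard behavior.
    class DefaultExecutionContext extends JobExecutionContext {
      override def runJob[T, U: ClassTag](sc: SparkContext, rdd: RDD[T],
                                          func: Iterator[T] => U): Array[U] =
        sc.runJob(rdd, func)
      override def persist[T](rdd: RDD[T]): RDD[T] = rdd.persist()
      override def unpersist[T](rdd: RDD[T], blocking: Boolean): RDD[T] =
        rdd.unpersist(blocking)
    }

With a contract of this shape, SparkContext would hold one JobExecutionContext instance (the default unless an alternative is selected via the execution-context setting the commits mention) and delegate to it where appropriate. A similar null-object sketch for NoOpTaskScheduler follows the commit log below.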

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/olegz/spark-1 SH-1

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/2849.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #2849
    
----
commit 84556c86f95500f89bb57f2bcc6c35f025799dc5
Author: Oleg Zhurakousky <[email protected]>
Date:   2014-09-16T15:26:48Z

    Initial commit to provide pluggable strategy to facilitate access to native Hadoop resources
    Added HadoopExecutionContext trait and its default implementation DefaultHadoopExecutionContext
    Modified SparkContext to instantiate and delegate to the instance of HadoopExecutionContext where appropriate
    
    Changed HadoopExecutionContext to JobExecutionContext
    Changed DefaultHadoopExecutionContext to DefaultExecutionContext
    The names were changed because having "Hadoop" in them would be confusing when Spark executes outside of Hadoop
    Added initial documentation and tests
    
    polished scaladoc
    
    annotated JobExecutionContext with @DeveloperApi
    
    eliminated TaskScheduler null checks in favor of a NoOpTaskScheduler,
    to be used in cases where execution of the Spark DAG is delegated to an external execution environment
    
    added execution-context check to SparkSubmit
    
    Added recognition of execution-context to SparkContext
    updated spark-class script to recognize when 'execution-context:' is used
    
    polished merge
    
    changed annotations from @DeveloperApi to @Experimental per a suggestion on the PR
    
    externalized persist and unpersist operations
    
    added classpath hooks to spark-class

----
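
The NoOpTaskScheduler change is a plain null-object substitution. Spark's real TaskScheduler trait is private[spark] and varies across versions, so the simplified stand-in interface below is an assumption used only to illustrate the pattern of replacing null checks:

    // Simplified stand-in for Spark's (private[spark]) TaskScheduler trait;
    // the real trait declares more methods. Used here only for illustration.
    trait SimpleTaskScheduler {
      def start(): Unit
      def stop(): Unit
      def defaultParallelism(): Int
    }

    // Null object installed when the Spark DAG is delegated to an external
    // execution environment, so callers can invoke scheduler methods
    // unconditionally instead of guarding each call with a null check.
    class NoOpTaskScheduler extends SimpleTaskScheduler {
      override def start(): Unit = ()            // nothing to start locally
      override def stop(): Unit = ()             // nothing to stop
      override def defaultParallelism(): Int = 1 // minimal sensible default
    }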

