GitHub user harishreedharan opened a pull request:

    https://github.com/apache/spark/pull/1506

    SPARK-2582. Make Block Manager Master pluggable.

    This patch makes BlockManagerMaster a trait, turns the current
    BlockManagerMaster into one of its possible implementations, and renames that
    implementation to StandaloneBlockManagerMaster. An additional (as yet
    undocumented) configuration parameter is added to select which
    BlockManagerMaster implementation to use. At some point, when we add
    BlockManagerMasters that write metadata to HDFS or replicate it, we can add
    other values that select those implementations.
    
    There is no change in current behavior. We must also ensure that other
    implementations use the current Akka actor itself, so the code in the
    BlockManager does not need to care which implementation is used on the BMM
    side. I am not sure how to enforce this programmatically, but it is not too
    much of a concern: the master is not user-pluggable, so the only options
    would be the ones shipped as part of Spark, and this should be fairly easy
    to enforce in practice.
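
    To illustrate the shape of the change, here is a minimal sketch of a
    pluggable master behind a trait, selected by a configuration parameter.
    The method signatures and the configuration key
    "spark.blockManagerMaster.class" are hypothetical placeholders, not the
    actual API or key introduced by this patch.

    ```scala
    // Hypothetical sketch: a trait with the current implementation as one
    // of its possible concrete classes, chosen via configuration.
    trait BlockManagerMaster {
      def registerBlockManager(id: String): Unit
      def stop(): Unit
    }

    // The renamed current implementation; behavior is unchanged.
    class StandaloneBlockManagerMaster extends BlockManagerMaster {
      def registerBlockManager(id: String): Unit =
        println(s"registered block manager $id")
      def stop(): Unit = ()
    }

    object BlockManagerMaster {
      // Select an implementation from configuration, defaulting to the
      // standalone (current) one when the key is unset.
      def create(conf: Map[String, String]): BlockManagerMaster =
        conf.getOrElse("spark.blockManagerMaster.class", "standalone") match {
          case "standalone" => new StandaloneBlockManagerMaster
          case other =>
            throw new IllegalArgumentException(s"Unknown master type: $other")
        }
    }
    ```

    With this shape, a future HDFS-backed or replicating master would only add
    a new case to the factory; callers in BlockManager keep talking to the
    trait.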

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/harishreedharan/spark pluggable-BMM

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/1506.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #1506
    
----
commit 840b3cec3383fb8a1943c863f4db313d694f8922
Author: Hari Shreedharan <[email protected]>
Date:   2014-07-21T05:30:59Z

    SPARK-2582. Make Block Manager Master pluggable.
    
    This patch makes BlockManagerMaster a trait, turns the current
    BlockManagerMaster into one of its possible implementations, and renames that
    implementation to StandaloneBlockManagerMaster. An additional (as yet
    undocumented) configuration parameter is added to select which
    BlockManagerMaster implementation to use. At some point, when we add
    BlockManagerMasters that write metadata to HDFS or replicate it, we can add
    other values that select those implementations.
    
    There is no change in current behavior. We must also ensure that other
    implementations use the current Akka actor itself, so the code in the
    BlockManager does not need to care which implementation is used on the BMM
    side. I am not sure how to enforce this programmatically, but it is not too
    much of a concern: the master is not user-pluggable, so the only options
    would be the ones shipped as part of Spark, and this should be fairly easy
    to enforce in practice.

----


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at [email protected] or file a JIRA ticket
with INFRA.
---