[ https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12683670#action_12683670 ]

Steve Loughran commented on HADOOP-5257:
----------------------------------------

1. Looking at the code, the plugin interface is very similar to the service 
lifecycle in HADOOP-3628: start, stop, and the like. There's no reason why we 
couldn't use the same lifecycle interface for plugins as for the services 
themselves; the parents would just push their children through the same 
lifecycle. The ping() operation could be used to aggregate the state of the 
children; you could even model the HTTP and IPC servers similarly. I've been 
contemplating doing this for MiniMRCluster.
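
Something like this, as a rough sketch (the Service interface below is an 
assumption modelled on the HADOOP-3628 lifecycle, not an existing class):

{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Assumed lifecycle interface, modelled on HADOOP-3628; not a real class.
interface Service {
    void start() throws IOException;
    void stop();
    boolean ping(); // true iff the service considers itself healthy
}

// A parent service pushes its children through the same lifecycle and
// aggregates their state in ping(); plugins would just be more children.
class CompositeService implements Service {
    private final List<Service> children = new ArrayList<Service>();

    public void addChild(Service child) {
        children.add(child);
    }

    public void start() throws IOException {
        for (Service child : children)
            child.start();
    }

    public void stop() {
        // stop in reverse order of startup
        for (int i = children.size() - 1; i >= 0; i--)
            children.get(i).stop();
    }

    public boolean ping() {
        // healthy only if every child is healthy
        for (Service child : children)
            if (!child.ping())
                return false;
        return true;
    }
}
{code}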

2. When you aggregate, you soon reach the policy questions, such as how to 
handle failure on startup. This patch just warns and continues. I may not want 
my name nodes to start up if they can't start a critical plugin; I'd rather 
have them drop into the failed state with a stack trace that management tools 
can see. At the very least, the warn/fail choice should be an option for the 
various nodes; having plugin-specific policy is feature creep.
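
For example, the choice could hang off a single configuration switch; a 
sketch (the key name below is illustrative, not an existing option):

{code}
// Inside NameNode.initialize(), say. The configuration key here is an
// assumption; the point is one warn-vs-fail switch per node type.
boolean failFast = conf.getBoolean("dfs.plugins.fail.on.startup.error", false);
for (Plugin p : PluginManager.loadPlugins(conf)) {
    try {
        p.namenodeStarted(this);
    } catch (Throwable t) {
        if (failFast) {
            // drop the node into the failed state, stack trace and all
            throw new RuntimeException("Plugin " + p + " failed to start", t);
        }
        LOG.warn("Plugin " + p + " failed to start; continuing", t);
    }
}
{code}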

3. Also: configuration. How do these plugins get configured? Are they expected 
to sneak a look at their parent's configuration, or should the parent pass down 
a specific configuration for them?
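
Hadoop's existing Configurable interface would be one way to make the 
hand-off explicit; a minimal sketch:

{code}
import org.apache.hadoop.conf.Configurable;
import org.apache.hadoop.conf.Configuration;

// Sketch: plugins receive their configuration explicitly from the parent,
// which can pass its own conf or a filtered/scoped copy of it.
public abstract class Plugin implements Configurable {
    private Configuration conf;

    public void setConf(Configuration conf) {
        this.conf = conf;
    }

    public Configuration getConf() {
        return conf;
    }
}

// The parent then configures each plugin before starting it:
//     p.setConf(conf);            // or a copy holding only plugin keys
//     p.namenodeStarted(this);
{code}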

4. Finally, if the plugin interface extended Closeable, it would make it 
possible for us to have utility code to handle sets of closeable items for 
cleanup, usable in lots of other places too.
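
Something along these lines (a sketch, not code from this patch):

{code}
import java.io.Closeable;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Generic cleanup helper: close everything, log failures, and keep going
// so one bad item cannot block the rest of the shutdown.
public final class CloseUtils {
    private static final Log LOG = LogFactory.getLog(CloseUtils.class);

    private CloseUtils() {
    }

    public static void closeQuietly(Iterable<? extends Closeable> items) {
        for (Closeable item : items) {
            if (item == null)
                continue;
            try {
                item.close();
            } catch (IOException e) {
                LOG.warn("Error closing " + item, e);
            }
        }
    }
}
{code}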

> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
>                 Key: HADOOP-5257
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5257
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>            Reporter: Carlos Valiente
>            Priority: Minor
>         Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch, 
> HADOOP-5257-v4.patch, HADOOP-5257-v5.patch, HADOOP-5257-v6.patch, 
> HADOOP-5257-v7.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS 
> functionality using arbitrary protocols, like Thrift or Protocol Buffers. 
> I'm opening this issue on Dhruba's suggestion in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}abstract class Plugin {
>     public abstract void datanodeStarted(DataNode datanode);
>     public abstract void datanodeStopping();
>     public abstract void namenodeStarted(NameNode namenode);
>     public abstract void namenodeStopping();
> }{code}
> Name node instances would then start the plug-ins according to a 
> configuration object, and would also shut them down when the node goes down:
> {code}public class NameNode {
>     // [..]
>     private List<Plugin> plugins;
>
>     private void initialize(Configuration conf) {
>         // [...]
>         plugins = PluginManager.loadPlugins(conf);
>         for (Plugin p: plugins)
>             p.namenodeStarted(this);
>     }
>     // [..]
>     public void stop() {
>         if (stopRequested)
>             return;
>         stopRequested = true;
>         for (Plugin p: plugins)
>             p.namenodeStopping();
>         // [..]
>     }
>     // [..]
> }{code}
> Data nodes would do a similar thing in {{DataNode.startDatanode()}} and 
> {{DataNode.shutdown()}}.
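>
> A matching data-node sketch might look like this (assumed shape, mirroring 
> the name-node hooks above rather than code from the patch):
> {code}// Sketch only: mirrors the NameNode example above.
> void startDatanode(Configuration conf) {
>     // [...]
>     plugins = PluginManager.loadPlugins(conf);
>     for (Plugin p: plugins)
>         p.datanodeStarted(this);
> }
>
> public void shutdown() {
>     for (Plugin p: plugins)
>         p.datanodeStopping();
>     // [...]
> }{code}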

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
