[
https://issues.apache.org/jira/browse/HADOOP-5257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12688282#action_12688282
]
Steve Loughran commented on HADOOP-5257:
----------------------------------------
@Dhruba. The two could co-exist; I'm just thinking about how to do it most
cleanly.
# Making all these things implement {{Closeable}} would be handy, since utility
code common to lots of components could then be shared (e.g. closing lists of
closeable components; see the first sketch after this list). Some existing
helper classes used by the various nodes could also be made closeable, giving a
single way to shut everything down.
# I'm unsure about the failure handling when there is a chain of plugged-in
things. Today failures on shutdown are logged and ignored; startup is trickier
to handle, and helper classes that get started by a node may not get shut down
cleanly if something later in the startup chain fails. (This is something I
think I handle in HADOOP-3628.)
# We could take the ping operation from HADOOP-3628, tease it out into its own
interface, and then any of these plugins implementing it would be pulled into
the node's health check; a possible interface is sketched below.
# If the plugins took an implementation of {{Service}} in their constructor,
they would be able to call {{Configured.getConf()}} to get the configuration,
along with any other bits of the standard API; see the last sketch below. I'm
not sure they would need to, not having a plugin of my own to judge by.
# We could extend the {{MockService}} to do some plugin testing: start plugins
with the lifecycle, roll them back, etc.
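For #1 and #2, a rough sketch of the kind of shared shutdown helper I mean;
{{CloseUtil}} and its method are made up for illustration, not anything in the
current patches:
{code}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;

import org.apache.commons.logging.Log;

/** Hypothetical helper for closing a list of closeable components. */
public final class CloseUtil {
  private CloseUtil() {
  }

  /** Close every component in the list, logging and ignoring
   *  failures: the same policy shutdown uses today. */
  public static void closeQuietly(Log log, List<? extends Closeable> closeables) {
    for (Closeable c : closeables) {
      if (c == null) {
        continue;
      }
      try {
        c.close();
      } catch (IOException e) {
        log.warn("Error closing " + c + ", continuing", e);
      }
    }
  }
}
{code}
If {{Plugin}} implemented {{Closeable}}, both namenode and datanode shutdown
would reduce to a single {{CloseUtil.closeQuietly(LOG, plugins)}} call.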
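For #3, the teased-out interface could be as small as this (the name and
signature are my guesses at what would fall out of HADOOP-3628, not code that
exists today):
{code}
import java.io.IOException;

/** Anything whose health can be checked. The node's health check
 *  would ping every plugin implementing this interface and fail
 *  if any ping() throws. */
public interface Pingable {
  void ping() throws IOException;
}
{code}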
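For #4, a hypothetical plugin along these lines; the class name, config key and
default port are all invented for illustration, and it assumes {{Service}}
extends {{Configured}} so that {{getConf()}} is available:
{code}
/** Hypothetical plugin that takes the owning service at construction
 *  time and pulls its configuration from it. */
public class ThriftExportPlugin {
  private final Service owner;

  public ThriftExportPlugin(Service owner) {
    this.owner = owner;
  }

  public void start() {
    // getConf() is inherited from Configured via Service;
    // the key and default are made-up examples
    int port = owner.getConf().getInt("dfs.plugin.thrift.port", 9090);
    // ...bring up the exported endpoint on that port...
  }
}
{code}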
> Export namenode/datanode functionality through a pluggable RPC layer
> --------------------------------------------------------------------
>
> Key: HADOOP-5257
> URL: https://issues.apache.org/jira/browse/HADOOP-5257
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Reporter: Carlos Valiente
> Priority: Minor
> Attachments: HADOOP-5257-v2.patch, HADOOP-5257-v3.patch,
> HADOOP-5257-v4.patch, HADOOP-5257-v5.patch, HADOOP-5257-v6.patch,
> HADOOP-5257-v7.patch, HADOOP-5257.patch
>
>
> Adding support for pluggable components would allow exporting DFS
> functionality using arbitrary protocols, such as Thrift or Protocol Buffers.
> I'm opening this issue on Dhruba's suggestion in HADOOP-4707.
> Plug-in implementations would extend this base class:
> {code}abstract class Plugin {
>   public abstract void datanodeStarted(DataNode datanode);
>   public abstract void datanodeStopping();
>   public abstract void namenodeStarted(NameNode namenode);
>   public abstract void namenodeStopping();
> }{code}
> Name node instances would then start the plug-ins according to a
> configuration object, and would also shut them down when the node goes down:
> {code}public class NameNode {
>   // [..]
>   private List<Plugin> plugins;
>   // [..]
>   private void initialize(Configuration conf) {
>     // [...]
>     plugins = PluginManager.loadPlugins(conf);
>     for (Plugin p : plugins)
>       p.namenodeStarted(this);
>   }
>   // [..]
>   public void stop() {
>     if (stopRequested)
>       return;
>     stopRequested = true;
>     for (Plugin p : plugins)
>       p.namenodeStopping();
>     // [..]
>   }
>   // [..]
> }{code}
> Data nodes would do the same in {{DataNode.startDatanode()}} and
> {{DataNode.shutdown()}}.
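> A sketch of the data node side might look like this (the exact hook
> points are a guess):
> {code}public class DataNode {
>   // [..]
>   private List<Plugin> plugins;
>   // [..]
>   void startDatanode(Configuration conf) {
>     // [...]
>     plugins = PluginManager.loadPlugins(conf);
>     for (Plugin p : plugins)
>       p.datanodeStarted(this);
>   }
>   // [..]
>   public void shutdown() {
>     for (Plugin p : plugins)
>       p.datanodeStopping();
>     // [..]
>   }
> }{code}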
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.