Allow ServicePlugins to hook callbacks into key service events
--------------------------------------------------------------

                 Key: HADOOP-5640
                 URL: https://issues.apache.org/jira/browse/HADOOP-5640
             Project: Hadoop Core
          Issue Type: Improvement
          Components: util
            Reporter: Todd Lipcon


HADOOP-5257 added the ability for the NameNode and DataNode to start and stop 
ServicePlugin implementations at NN/DN start/stop. However, this integration is 
insufficient for some common use cases.

We should add functionality for plugins to subscribe to events generated by 
the service they plug into. Some potential hook points are:

NameNode:
  - new datanode registered
  - datanode has died
  - exception caught
  - etc?

DataNode:
  - startup
  - initial registration with NN complete (this is important for HADOOP-4707 to 
sync up datanode.dnRegistration.name with the NN-side registration)
  - namenode reconnect
  - some block transfer hooks?
  - exception caught

I see two potential routes for implementation:

1) We make an enum for the types of hookpoints and have a general function in 
the ServicePlugin interface. Something like:

{code:java}
enum HookPoint {
  DN_STARTUP,
  DN_RECEIVED_NEW_BLOCK,
  DN_CAUGHT_EXCEPTION,
  ...
}

void runHook(HookPoint hp, Object value);
{code}
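To make the downside of option (1) concrete, here is a minimal sketch of what a plugin implementing that interface would look like. All names here (LoggingPlugin, lastMessage) are illustrative, not an actual Hadoop API: every event funnels through one runHook(), so each plugin ends up with a switch over HookPoint plus casts on the untyped value.

{code:java}
// Illustrative only -- not a real Hadoop class.
enum HookPoint {
  DN_STARTUP,
  DN_RECEIVED_NEW_BLOCK,
  DN_CAUGHT_EXCEPTION
}

class LoggingPlugin {
  String lastMessage;  // recorded so the effect of each hook is visible

  void runHook(HookPoint hp, Object value) {
    switch (hp) {
      case DN_STARTUP:
        lastMessage = "datanode started";
        break;
      case DN_RECEIVED_NEW_BLOCK:
        // the plugin has to know what type "value" really is
        lastMessage = "new block: " + value;
        break;
      case DN_CAUGHT_EXCEPTION:
        // unchecked cast: nothing stops a caller passing the wrong type
        lastMessage = "caught: " + ((Exception) value).getMessage();
        break;
    }
  }
}
{code}

Note that the compiler cannot verify the value's type against the hook point; a mismatch only surfaces as a ClassCastException at runtime.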

2) We make classes specific to each "pluggable" as was originally suggested in 
HADOOP-5257. Something like:

{code:java}
class DataNodePlugin {
  void datanodeStarted() {}
  void receivedNewBlock(block info, etc) {}
  void caughtException(Exception e) {}
  ...
}
{code}
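For comparison, here is a sketch of what the dispatch side could look like under option (2). The class and method names (PluginDispatcher, fireDatanodeStarted, the String blockId parameter) are assumptions for illustration, not the real API: the service holds a typed list of plugins and calls the matching method directly, so the compiler checks every callback signature and no switch or cast is needed.

{code:java}
import java.util.ArrayList;
import java.util.List;

// Illustrative base class: subclasses override only the callbacks
// they care about.
class DataNodePlugin {
  void datanodeStarted() {}
  void receivedNewBlock(String blockId) {}
  void caughtException(Exception e) {}
}

// Hypothetical dispatcher the DataNode would own.
class PluginDispatcher {
  private final List<DataNodePlugin> plugins =
      new ArrayList<DataNodePlugin>();

  void add(DataNodePlugin p) {
    plugins.add(p);
  }

  // Each callback is wrapped so one misbehaving plugin cannot take
  // down the service or starve the remaining plugins.
  void fireDatanodeStarted() {
    for (DataNodePlugin p : plugins) {
      try {
        p.datanodeStarted();
      } catch (Throwable t) {
        // log and continue; plugin failures should not be fatal
      }
    }
  }
}
{code}

Adding a new event under this scheme means adding a method with a default empty body, so existing plugins keep compiling unchanged.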

I personally prefer option (2) since we can ensure plugin API compatibility at 
compile-time, and we avoid an ugly switch statement in a runHook() function.

Interested to hear what people's thoughts are here.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.