[ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12721774#action_12721774 ]
Tom White commented on HADOOP-5640:
-----------------------------------

Overall +1. A few comments:

* The plugin design is actually the listener pattern. For API extensibility it might be worth considering a ServiceEvent class that is passed to the methods of ServicePlugin and subclasses, so that extra context information can be passed without breaking plugins. Also, ServicePlugin should be an abstract class.
* Rename PluginDispatcher to ServicePluginDispatcher. There are other plugins (e.g. MemoryCalculatorPlugin), so it's worth being more precise.
* SingleArgumentRunnable might be named ServicePluginCallback to better indicate its use.

BTW the latest patch doesn't apply; however, it will need regenerating anyway after the project split.

> Allow ServicePlugins to hook callbacks into key service events
> --------------------------------------------------------------
>
>          Key: HADOOP-5640
>          URL: https://issues.apache.org/jira/browse/HADOOP-5640
>      Project: Hadoop Core
>   Issue Type: Improvement
>   Components: util
>     Reporter: Todd Lipcon
>     Assignee: Todd Lipcon
>  Attachments: hadoop-5640.txt, HADOOP-5640.v2.txt, hadoop-5640.v3.txt
>
>
> HADOOP-5257 added the ability for NameNode and DataNode to start and stop ServicePlugin implementations at NN/DN start/stop. However, this is insufficient integration for some common use cases.
>
> We should add some functionality for plugins to subscribe to events generated by the service they're plugging into. Some potential hook points are:
>
> NameNode:
>   - new datanode registered
>   - datanode has died
>   - exception caught
>   - etc?
>
> DataNode:
>   - startup
>   - initial registration with NN complete (this is important for HADOOP-4707 to sync up datanode.dnRegistration.name with the NN-side registration)
>   - namenode reconnect
>   - some block transfer hooks?
>   - exception caught
>
> I see two potential routes for implementation:
>
> 1) We make an enum for the types of hookpoints and have a general function in the ServicePlugin interface.
> Something like:
> {code:java}
> enum HookPoint {
>   DN_STARTUP,
>   DN_RECEIVED_NEW_BLOCK,
>   DN_CAUGHT_EXCEPTION,
>   ...
> }
>
> void runHook(HookPoint hp, Object value);
> {code}
>
> 2) We make classes specific to each "pluggable" as was originally suggested in HADOOP-5257. Something like:
> {code:java}
> class DataNodePlugin {
>   void datanodeStarted() {}
>   void receivedNewBlock(block info, etc) {}
>   void caughtException(Exception e) {}
>   ...
> }
> {code}
>
> I personally prefer option (2), since we can ensure plugin API compatibility at compile-time and we avoid an ugly switch statement in a runHook() function.
>
> Interested to hear what people's thoughts are here.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
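For illustration, the review suggestions above (a ServiceEvent carrying context, ServicePlugin as an abstract class, the renamed ServicePluginDispatcher, and ServicePluginCallback in place of SingleArgumentRunnable) could be sketched roughly as follows. This is a minimal, hypothetical sketch, not the actual Hadoop API: all class and method names here are illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Carries context for a service event; extra fields can be added later
// without changing plugin method signatures (Tom White's extensibility point).
class ServiceEvent {
    private final String description;
    ServiceEvent(String description) { this.description = description; }
    String getDescription() { return description; }
}

// An abstract class rather than an interface, so new hooks can ship with
// empty default bodies and old plugin subclasses stay source-compatible.
abstract class ServicePlugin {
    void started(ServiceEvent e) {}
    void stopped(ServiceEvent e) {}
}

// Option (2) style: a DataNode-specific plugin with typed hook methods.
abstract class DataNodePlugin extends ServicePlugin {
    void receivedNewBlock(ServiceEvent e) {}
    void caughtException(ServiceEvent e) {}
}

// Renamed dispatcher, per the review comment: fans a callback out to all
// registered plugins, isolating the service from plugin failures.
class ServicePluginDispatcher<P extends ServicePlugin> {
    // ServicePluginCallback in place of SingleArgumentRunnable.
    interface ServicePluginCallback<T> { void apply(T plugin); }

    private final List<P> plugins = new ArrayList<>();

    void register(P plugin) { plugins.add(plugin); }

    void dispatch(ServicePluginCallback<P> callback) {
        for (P p : plugins) {
            try {
                callback.apply(p);
            } catch (Throwable t) {
                // A misbehaving plugin must not take down the service.
                System.err.println("Plugin threw: " + t);
            }
        }
    }
}

public class PluginSketch {
    public static void main(String[] args) {
        ServicePluginDispatcher<DataNodePlugin> dispatcher =
                new ServicePluginDispatcher<>();
        final StringBuilder log = new StringBuilder();

        dispatcher.register(new DataNodePlugin() {
            @Override void receivedNewBlock(ServiceEvent e) {
                log.append("block:").append(e.getDescription());
            }
        });

        // The service fires a hook; only the event object crosses the API.
        ServiceEvent event = new ServiceEvent("blk_123");
        dispatcher.dispatch(p -> p.receivedNewBlock(event));

        System.out.println(log);
    }
}
```

Because every hook method receives only a ServiceEvent, a later release can add fields to the event (or add new empty-bodied hook methods to the abstract classes) without recompiling existing plugins, which is the compile-time-compatibility advantage option (2) claims over the runHook() switch.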