[ https://issues.apache.org/jira/browse/HADOOP-5640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12716142#action_12716142 ]
Hadoop QA commented on HADOOP-5640:
-----------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12408851/hadoop-5640.v3.txt
against trunk revision 781602.

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 2 new or modified tests.

+1 javadoc. The javadoc tool did not generate any warning messages.

+1 javac. The applied patch does not increase the total number of javac compiler warnings.

+1 findbugs. The patch does not introduce any new Findbugs warnings.

+1 Eclipse classpath. The patch retains Eclipse classpath integrity.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

+1 core tests. The patch passed core unit tests.

-1 contrib tests. The patch failed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/458/console

This message is automatically generated.

> Allow ServicePlugins to hook callbacks into key service events
> --------------------------------------------------------------
>
>                 Key: HADOOP-5640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5640
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: util
>            Reporter: Todd Lipcon
>            Assignee: Todd Lipcon
>         Attachments: hadoop-5640.txt, HADOOP-5640.v2.txt, hadoop-5640.v3.txt
>
>
> HADOOP-5257 added the ability for NameNode and DataNode to start and stop
> ServicePlugin implementations at NN/DN start/stop.
> However, this is insufficient integration for some common use cases.
> We should add some functionality for Plugins to subscribe to events generated
> by the service they're plugging into. Some potential hook points are:
>
> NameNode:
> - new datanode registered
> - datanode has died
> - exception caught
> - etc?
>
> DataNode:
> - startup
> - initial registration with NN complete (this is important for HADOOP-4707
>   to sync up datanode.dnRegistration.name with the NN-side registration)
> - namenode reconnect
> - some block transfer hooks?
> - exception caught
>
> I see two potential routes for implementation:
>
> 1) We make an enum for the types of hook points and have a general function in
> the ServicePlugin interface. Something like:
>
> {code:java}
> enum HookPoint {
>   DN_STARTUP,
>   DN_RECEIVED_NEW_BLOCK,
>   DN_CAUGHT_EXCEPTION,
>   ...
> }
>
> void runHook(HookPoint hp, Object value);
> {code}
>
> 2) We make classes specific to each "pluggable" as was originally suggested
> in HADOOP-5257. Something like:
>
> {code:java}
> class DataNodePlugin {
>   void datanodeStarted() {}
>   void receivedNewBlock(block info, etc) {}
>   void caughtException(Exception e) {}
>   ...
> }
> {code}
>
> I personally prefer option (2), since we can ensure plugin API compatibility
> at compile time, and we avoid an ugly switch statement in a runHook()
> function.
>
> Interested to hear what people's thoughts are here.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
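A compilable illustration of option (2) from the discussion above, assuming a base class with no-op hook methods so plugins override only the events they care about. The names `DataNodePluginSketch`, `Block`, and `BlockCountingPlugin` are hypothetical stand-ins for this sketch, not actual Hadoop APIs:

```java
// Sketch of option (2): per-service plugin base class with no-op default hooks.
// All names here are illustrative placeholders, not real Hadoop classes.
public class DataNodePluginSketch {

    // Stand-in for whatever block metadata the real hook would receive.
    static class Block {
        final long blockId;
        Block(long blockId) { this.blockId = blockId; }
    }

    // Base class: every hook is a no-op, so subclasses stay source-compatible
    // when new hooks are added later.
    static class DataNodePlugin {
        void datanodeStarted() {}
        void receivedNewBlock(Block block) {}
        void caughtException(Exception e) {}
    }

    // Example plugin that overrides a single hook to count received blocks.
    static class BlockCountingPlugin extends DataNodePlugin {
        int blocksSeen = 0;

        @Override
        void receivedNewBlock(Block block) {
            blocksSeen++;
        }
    }

    public static void main(String[] args) {
        BlockCountingPlugin plugin = new BlockCountingPlugin();
        plugin.datanodeStarted();              // inherited no-op
        plugin.receivedNewBlock(new Block(1L));
        plugin.receivedNewBlock(new Block(2L));
        System.out.println(plugin.blocksSeen); // prints 2
    }
}
```

This shape gives the compile-time safety argued for in the comment: a plugin that overrides a misspelled hook fails to compile (with `@Override`), whereas a `runHook(HookPoint, Object)` dispatch would only fail at runtime.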