To go some way toward answering my own question: this comment from Alejandro is very helpful. https://issues.apache.org/jira/browse/HDFS-2277?focusedCommentId=13089177&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13089177
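In short, the workflow comes down to building a distribution tarball with Maven and then running from it. A minimal sketch of the build step, assuming a Maven-based (post-mavenization) trunk checkout; exact profiles and output paths vary by branch:

```sh
# Build a full distribution tarball, skipping tests for a faster dev cycle.
# The -Pdist profile with -Dtar is the standard Maven dist build; the
# resulting tarball lands under hadoop-dist/target/.
mvn clean package -Pdist -DskipTests -Dtar
```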
---------- Forwarded message ----------
From: Ravi Prakash <[email protected]>
Date: Thu, Aug 18, 2011 at 4:19 PM
Subject: Help running built artifacts
To: [email protected]

Hi,

http://wiki.apache.org/hadoop/HowToContribute is a great resource detailing the steps needed to build jars and tars from the source code. However, I am still not sure of the best way to run the Hadoop servers (NN, SNN, DNs, JT, TTs) using those built jars. Could we all please reach consensus that, for an efficient dev cycle, we should be able to start the Hadoop servers from built source code easily? How do people currently do this? Is there a script to transfer the built artifacts into a single directory, which I can then label HADOOP_PREFIX and run from? Whatever the best method is, I feel it should be included in the HowToContribute wiki. It's not really effective testing if, after making changes, I only run test-patch and the unit tests without bringing up an actual single-node cluster.

Thanks
Ravi
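To address the HADOOP_PREFIX part of my question concretely: once the tarball exists, you can untar it anywhere and run a single-node cluster straight from that directory. A hedged sketch, assuming a 0.23/trunk-style layout; `hadoop-<version>` is a placeholder for whatever your build produced, and script locations (bin/ vs. sbin/) differ across branches:

```sh
# Untar the built distribution somewhere convenient and run from it.
# (hadoop-<version> is a placeholder -- substitute your build's version.)
tar -xzf hadoop-dist/target/hadoop-*.tar.gz -C /tmp
export HADOOP_PREFIX=/tmp/hadoop-<version>

# Format the NameNode once, then start the HDFS daemons from the dist tree.
"$HADOOP_PREFIX"/bin/hdfs namenode -format
"$HADOOP_PREFIX"/sbin/hadoop-daemon.sh start namenode
"$HADOOP_PREFIX"/sbin/hadoop-daemon.sh start datanode
```

You still need a minimal pseudo-distributed configuration (fs.default.name and friends) under the dist tree's config directory (etc/hadoop on trunk, conf/ on older branches) before formatting and starting the daemons.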
