[ https://issues.apache.org/jira/browse/HADOOP-7518?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13080096#comment-13080096 ]

Todd Lipcon commented on HADOOP-7518:
-------------------------------------

Is there a way to run parts of the cluster without having to build and untar a
tarball? It used to be that I could edit the source, recompile in 20 seconds or
so, and then run "hdfs namenode" right from there. Having to build a whole
tarball will stretch this cycle out to several minutes.
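
Concretely, the cycle I mean looks roughly like this (target and script names
are approximate, from the pre-mavenisation layout, so treat them as a sketch):

{code}
# Previous fast cycle (sketch): incremental recompile, then run the daemon
# straight from the working copy -- no tarball build, no untar step.
cd hdfs
ant compile -Dresolvers=internal   # incremental rebuild, ~20 seconds
bin/hdfs namenode                  # start the namenode from the build tree
{code}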

> Unable to start HDFS cluster on trunk (after the common mavenisation)
> ---------------------------------------------------------------------
>
>                 Key: HADOOP-7518
>                 URL: https://issues.apache.org/jira/browse/HADOOP-7518
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: scripts
>            Reporter: Vinod Kumar Vavilapalli
>         Attachments: Patch
>
>
> This is what I do:
> In common directory:
> mvn package -Pbintar -Dmaven.test.skip.exec=true 
> cp target/hadoop-common-0.23.0-SNAPSHOT-bin.tar.gz ~/tmp/common/
> In hdfs directory:
> ant veryclean binary -Dresolvers=internal
> cp build/hadoop-hdfs-0.23.0-SNAPSHOT-bin.tar.gz ~/tmp/hdfs/
> Untar the tarballs, and start namenode as follows:
> {quote}
> export HADOOP_COMMON_HOME=~vinodkv/tmp/common/hadoop-common-0.23.0-SNAPSHOT-bin;
> export HADOOP_CONF_DIR=~vinodkv/tmp/conf;
> export HADOOP_HDFS_HOME=~vinodkv/tmp/hdfs/hadoop-hdfs-0.23.0-SNAPSHOT;
> $HADOOP_COMMON_HOME/sbin/hadoop-daemon.sh start namenode
> {quote}
> The last one simply says "Hadoop common not found." and exits.
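> Presumably the error comes from the common-home sanity check in the launcher
> scripts; a rough sketch of that kind of guard (the actual script contents may
> differ) is:
> {code}
> # Assumed shape of the check that prints the error and aborts:
> if [ ! -d "${HADOOP_COMMON_HOME}" ]; then
>   echo "Hadoop common not found."
>   exit
> fi
> {code}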

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
