I cannot say for the current code, since it looks like people have undone
the project split. But based on Eli's repo, with the code of June 12th, I
could do the following to run HDFS in pseudo-distributed mode:

- Set your JAVA_HOME in hadoop-common/conf/hadoop-env.sh to match your
system.
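For example, something like this in hadoop-env.sh (the JDK path below is
just an illustration, use whatever matches your system):

export JAVA_HOME=/usr/lib/jvm/java-6-sun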
- Apply the following patch, so that bin/hadoop sources hadoop-config.sh
from the same bin/ directory instead of ../libexec/:
diff --git a/bin/hadoop b/bin/hadoop
index 88c49eb..84992e5 100755
--- a/bin/hadoop
+++ b/bin/hadoop
@@ -21,7 +21,7 @@ bin=`which $0`
 bin=`dirname ${bin}`
 bin=`cd "$bin"; pwd`

-. "$bin"/../libexec/hadoop-config.sh
+. "$bin"/hadoop-config.sh

 function print_usage(){
   echo "Usage: hadoop [--config confdir] COMMAND"


- Export some environment variables pointing at your local clones:
export HADOOP_COMMON_HOME=<hadoop-common's repo path>
export HADOOP_HDFS_HOME=<hadoop-hdfs's repo path>

- Configure core-site.xml and hdfs-site.xml as usual for pseudo-distributed
mode.
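As a sketch, the minimal settings I have in mind are the classic
pseudo-distributed ones (namenode on localhost:9000, single replica). I am
assuming here that the conf directory in use is the same hadoop-common/conf
that holds hadoop-env.sh above, so adjust the paths if your setup picks up
a different conf dir:

cat > $HADOOP_COMMON_HOME/conf/core-site.xml <<EOF
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

cat > $HADOOP_COMMON_HOME/conf/hdfs-site.xml <<EOF
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF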


Then $HADOOP_HDFS_HOME/bin/hdfs namenode should work, as well as the
command for the datanodes.
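In other words, roughly this sequence (the -format step only on the very
first run, since it creates a fresh, empty filesystem):

$HADOOP_HDFS_HOME/bin/hdfs namenode -format
$HADOOP_HDFS_HOME/bin/hdfs namenode
$HADOOP_HDFS_HOME/bin/hdfs datanode    # in a second terminal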


Personally, I would expect to be able to run from source, because sometimes
it is good to test by running the actual thing.


Regards,
André Oriani

On Thu, Jun 16, 2011 at 21:15, Kirk True <k...@mustardgrain.com> wrote:

> Should running ./bin/hdfs from the source root work?
>
> I get these errors:
>
>    [kirk@bubbas apache]$ ./bin/hdfs namenode
>    ./bin/hdfs: line 154: cygpath: command not found
>    ./bin/hdfs: line 177: exec: : not found
>
> I can hack around the latter by setting the JAVA env var up first:
>
>    export JAVA=$JAVA_HOME/bin/java
>
> Still, I get this:
>
>    [kirk@bubbas apache]$ ./bin/hdfs namenode
>    ./bin/hdfs: line 154: cygpath: command not found
>    Exception in thread "main" java.lang.NoClassDefFoundError:
> org/apache/hadoop/hdfs/server/namenode/NameNode
>    Caused by: java.lang.ClassNotFoundException:
> org.apache.hadoop.hdfs.server.namenode.NameNode
>        at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
>        at java.security.AccessController.doPrivileged(Native Method)
>        at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
>        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
>        at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
>    Could not find the main class:
> org.apache.hadoop.hdfs.server.namenode.NameNode.
>    Program will exit.
>
> This is after running a build (`ant compile`) on the source.
>
> I can dig into it, but I'm wondering if running from source/stand-alone
> like this is even expected to work.
>
> -- Kirk
>
>
> On 06/07/2011 01:40 PM, André Oriani wrote:
>
>> Thanks a lot Eli, it really helped a lot. I think I got the general idea
>> of the scripts.
>>
>> Thanks,
>> André
>>
>> On Tue, Jun 7, 2011 at 16:51, Eli Collins <e...@cloudera.com> wrote:
>>
>>> Hey Andre,
>>>
>>> You can run an hdfs build out of common/hdfs trees checked out from
>>> svn or git. Here are some scripts that make this easier:
>>>
>>> https://github.com/elicollins/hadoop-dev
>>>
>>> Thanks,
>>> Eli
>>>
>>> On Tue, Jun 7, 2011 at 11:56 AM, André Oriani
>>> <ra078...@students.ic.unicamp.br> wrote:
>>>
>>>> Hi,
>>>>
>>>>
>>>> I have cloned the repos for hadoop-common and hadoop-hdfs and built
>>>> them using "ant mvn-install". Now I would like to be able to run HDFS
>>>> in pseudo-distributed mode to test some modifications of mine. One
>>>> year ago I could do it, but now I have had no luck. The scripts are
>>>> failing, complaining about files not being found and the like.
>>>>
>>>> Has anyone succeeded recently in doing something similar to what I am
>>>> intending to do? Or do I need to generate a tarball and install it?
>>>>
>>>>
>>>> Thanks and Regards,
>>>> André
>>>>
>>>>
