Thanks, Harsh, that helped.
From: Harsh J [ha...@cloudera.com]
Sent: Thursday, February 02, 2012 7:52 PM
To: common-dev@hadoop.apache.org
Subject: Re: Getting started with Eclipse for Hadoop 1.0.0?
Hi Tim,
Could you try using the branch-1 from the
Is the issue the value of ${HADOOP_NN_DIR}?
Yes.
You need to set it to your namenode directory. By default it is
file:///tmp/hadoop-${user.name}/dfs/name. An environment variable won't
work here, so you need to set an explicit path.
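For example, the location can be pinned down in conf/hdfs-site.xml. A minimal sketch, assuming the 1.x-era property name dfs.name.dir; the path is only illustrative:

```xml
<!-- conf/hdfs-site.xml: set the namenode directory explicitly -->
<!-- (illustrative path; dfs.name.dir is the 1.x property name) -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/dfs/name</value>
</property>
```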
On 02/03/2012 07:16 PM, Hai Huang wrote:
Thanks Harsh.
I figured out the issue. I added the following environment variables:
==
export HADOOP_DEV_HOME=`pwd`
export HADOOP_MAPRED_HOME=${HADOOP_DEV_HOME}
export HADOOP_COMMON_HOME=${HADOOP_DEV_HOME}
export HADOOP_HDFS_HOME=${HADOOP_DEV_HOME}
export
Debugging Hadoop daemons in Eclipse / Netbeans debugger
---
Key: HADOOP-8024
URL: https://issues.apache.org/jira/browse/HADOOP-8024
Project: Hadoop Common
Issue Type: Improvement
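One common way to attach Eclipse to a running daemon today is to pass a JDWP agent through the daemon's options in conf/hadoop-env.sh. A sketch, assuming the 1.x hadoop-env.sh variable names; the port is arbitrary:

```shell
# conf/hadoop-env.sh: make the namenode JVM wait for a remote debugger.
# Port 8000 is arbitrary; suspend=y blocks startup until the IDE attaches.
export HADOOP_NAMENODE_OPTS="-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=8000 $HADOOP_NAMENODE_OPTS"
```

In Eclipse, attaching is then a "Remote Java Application" debug configuration pointed at that port.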
I am doing the following steps to run an example -- randomwriter:
1. sbin/hadoop-daemon.sh start namenode
2. sbin/hadoop-daemon.sh start datanode
3. bin/yarn-daemon.sh start resourcemanager
4. bin/yarn-daemon.sh start nodemanager
5. ./bin/hadoop jar
Essentially, there are two ways to achieve the functionality you need.
1. Fork your MR job from your Java application, as Evans suggested;
look at the Java APIs Runtime.exec and ProcessBuilder.
2. In your Java application, you write the complete main method of your
test.jar - which will
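The two approaches above can be sketched in one self-contained class. Class names, commands, and arguments below are illustrative, not taken from the thread; the real invocation would fork `bin/hadoop jar ...` or call your job driver's main directly:

```java
import java.util.Arrays;
import java.util.List;

public class LaunchDemo {

    // Hypothetical stand-in for the main class packaged inside test.jar.
    static class JobDriver {
        public static void main(String[] args) {
            // A real driver would configure and submit the MapReduce job here.
            System.out.println("driver invoked with " + args.length + " args");
        }
    }

    // Approach 1: fork the job as a separate process with ProcessBuilder.
    static int forkJob(List<String> command) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.inheritIO();              // stream the child's stdout/stderr to ours
        Process p = pb.start();
        return p.waitFor();          // exit code of the forked process
    }

    public static void main(String[] args) throws Exception {
        // Approach 1 in real use would look like (paths illustrative):
        //   forkJob(Arrays.asList("bin/hadoop", "jar", "test.jar", "in", "out"));
        // Here we fork a trivial command so the sketch runs anywhere:
        int exit = forkJob(Arrays.asList("echo", "forked job finished"));
        System.out.println("exit code: " + exit);

        // Approach 2: call the driver's main() directly, inside this JVM.
        JobDriver.main(new String[] {"input", "output"});
    }
}
```

Approach 1 isolates the job in its own JVM (its System.exit cannot kill your application); approach 2 keeps everything in one process, which is simpler to debug.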