On Wed, Apr 21, 2010 at 6:07 PM, JAGANADH G <[email protected]> wrote:

> Dear all
> Is it possible to run haddop in non-cluster or non cloud environment.
>

If you don't have enough systems to evaluate Hadoop, you still have two
options to practice with Hadoop/MapReduce:

1). Standalone Operation : By default, Hadoop is configured to run in a
non-distributed mode, as a single Java process.

http://hadoop.apache.org/common/docs/current/quickstart.html

2). OpenSolaris Live Hadoop :

http://www.mail-archive.com/[email protected]/msg14908.html
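Standalone mode runs the whole job in a single process, which makes the MapReduce dataflow easy to see. As a rough sketch of that dataflow (a toy word count in plain Python for illustration only, not Hadoop's actual API):

```python
# Toy word count illustrating the map -> shuffle -> reduce dataflow
# that Hadoop runs, here all in one process. Illustration only.
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every input record."""
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts collected for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

records = ["hadoop runs mapreduce", "mapreduce runs on hadoop"]
counts = reduce_phase(shuffle(map_phase(records)))
print(counts)  # {'hadoop': 2, 'runs': 2, 'mapreduce': 2, 'on': 1}
```

In a real cluster the map and reduce calls run as tasks on different boxes and the shuffle moves data over the network, but the logic is the same.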


>
> Suppose I have five Linux system connected through lan . With hadoop is it
> possible to run a distributed job(Map reduce) in the five system.
>
> I am a beginner in hadoop. If anything is not clear forgive
>
>
HDFS has a master/slave architecture. An HDFS cluster consists of a single
NameNode (the master server) and a number of DataNodes, usually one per
node in the cluster.

The MapReduce framework likewise consists of a single master, the JobTracker,
and one slave TaskTracker per cluster node.

If you have 5 boxes (say box1, box2, box3, box4, box5), what you have to do is:
first, install and configure the HDFS NameNode (master) on box1, and install
and configure HDFS DataNodes (slaves) on all the other boxes (box2, box3,
box4, box5). Then install and configure the JobTracker on box1 and a
TaskTracker on all the other boxes (box2, box3, box4, box5).

1). box1 acts as both the NameNode and the JobTracker.

2). box2, box3, box4 and box5 act as DataNodes as well as TaskTrackers.
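To wire the boxes together as above, every box needs to know where the master
daemons live. The key settings look roughly like this (property names are the
Hadoop 0.20-era ones; the hostnames and ports are just example values, so
check them against the version you install):

```xml
<!-- conf/core-site.xml on every box: point HDFS at the NameNode on box1 -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://box1:9000</value>
  </property>
</configuration>
```

```xml
<!-- conf/mapred-site.xml on every box: point MapReduce at the JobTracker on box1 -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>box1:9001</value>
  </property>
</configuration>
```

The conf/slaves file on box1 then lists box2, box3, box4 and box5, one per
line. After that, you would format HDFS once with `bin/hadoop namenode
-format` on box1 and bring the cluster up with `bin/start-dfs.sh` and
`bin/start-mapred.sh` from box1.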



Resource :

1). http://developer.yahoo.com/hadoop/tutorial/

2). http://hadoop.apache.org/

The above two links have very good information about Hadoop/MapReduce. Please
spend some time studying HDFS and MapReduce.

In case you need any further technical help, please feel free to reply.

Thanks & Rg
Mohan L
_______________________________________________
ILUGC Mailing List:
http://www.ae.iitm.ac.in/mailman/listinfo/ilugc
