On my cluster I use the patch in 
https://issues.apache.org/jira/browse/HADOOP-435
to build a single jar file and zip my configuration into that jar file.

Installation is just a matter of copying the one jar file to all the machines 
in my cluster.
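A minimal sketch of what that install step looks like — the jar name, the 
machines.txt host list, and the /opt/hadoop path are all examples of mine, 
not the actual names; the loop writes its scp commands to a plan file so you 
can review them before copying anything:

```shell
#!/bin/sh
# Dry-run sketch of the "one jar" install. Assumed names (not from the
# original setup): build/hadoop-all.jar, machines.txt, /opt/hadoop/.
JAR=build/hadoop-all.jar

printf 'node1\nnode2\n' > machines.txt   # example slave list for the dry run
mkdir -p build && : > "$JAR"             # stand-in jar so the sketch runs as-is

: > install-plan.txt
while read -r host; do
  echo "scp $JAR $host:/opt/hadoop/" >> install-plan.txt
done < machines.txt
cat install-plan.txt
```

Once the plan looks right, piping install-plan.txt through sh does the actual 
copying.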

To start up, I have a script that runs through all the machines in the cluster 
and runs the start.sh script. The start.sh script assumes the jobtracker and 
namenode run on the same machine; if that's not the case, you need to tweak 
the script a bit. (It's a rather trivial 20-line script.)
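The rough shape of that kind of start script is below — the hostnames and the 
start.sh path are made up for illustration (node1 stands in for the shared 
jobtracker/namenode machine), and it emits the ssh commands to a plan file 
rather than running them, so you can check the plan first:

```shell
#!/bin/sh
# Sketch of a trivial cluster start loop. HOSTS and the start.sh path are
# illustrative assumptions, not the actual script's values.
HOSTS="node1 node2 node3"   # node1 = jobtracker + namenode in this sketch

: > start-plan.txt
for host in $HOSTS; do
  echo "ssh $host /opt/hadoop/start.sh" >> start-plan.txt
done
cat start-plan.txt          # inspect, then run the commands for real
```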

The patch will never go in since the issue was closed, but I still find it 
useful for situations where I don't want to do a lot of tarring and 
configuring to get things set up. (I'm probably just lazy, since it doesn't 
seem to bother most people :)

ben

On Tuesday 15 January 2008 06:41:04 Miles Osborne wrote:
> I have been through this very recently.  My approach was to:
>
> --manually set up the master (i.e. specify the conf files etc.)
> --tar up Java and Hadoop such that unpacking them puts them in the desired
> location
> --create the ssh keys on the master.
>
> now, create a shell script which does the following:
>
> --open the necessary ports
> --copy across the ssh keys from the master and install them in the correct
> location
> --copy across and untar java and hadoop
> --assign the correct permissions to the distributed file system directory
> on the current node
> --create user accounts as necessary
>
> copy this script across to each slave in turn and run it; adding a new
> slave node will take a minute or two.
>
> (this assumes each node already has linux installed on it and the
> filesystem is identical)
>
> Miles
>
> On 15/01/2008, Bin YANG <[EMAIL PROTECTED]> wrote:
> > Dear colleagues,
> >
> > Right now, I have to deploy ubuntu 7.10 + hadoop 0.15 on 16 PCs.
> > One PC will be set as master, the others will be set as slaves.
> > The PCs have similar hardware, or even the same hardware.
> >
> > Is there a quick and easy way to deploy hadoop on these PCs?
> >
> > Do you think that
> >
> > 1. ghost a whole successful ubuntu 7.10 + hadoop 0.15 hard disk
> > 2. and then copy the image to other PCs
> >
> > is the best way?
> >
> > Thank you very much.
> >
> > Best wishes,
> > Bin YANG
> >
> >
> > --
> > Bin YANG
> > Department of Computer Science and Engineering
> > Fudan University
> > Shanghai, P. R. China
> > EMail: [EMAIL PROTECTED]
