> 2. If hadoop is configured in a multinode cluster (with one machine as the
>> > namenode and jobtracker and the other machines as slaves; the namenode
>> > acts as a slave node also), how do we handle namenode failover?
> 
> There are backup mechanisms that you can use to allow you to rebuild the
> namenode.  There is no official solution for the high-availability problem.
> Most hadoop systems work on batch problems, where an hour or two of downtime
> every few years is not a problem.

Actually, we were thinking of the product of many map-reduce tasks as needing
high availability.  In other words, you can handle downtime in creating the
database, but not so much in serving it up.  If hbase is the source from
which we build pages, then downtime is more of a problem.  If anyone is
thinking about an unofficial solution, we'd be interested.


On 12/20/07 12:05 AM, "Ted Dunning" <[EMAIL PROTECTED]> wrote:

> 
> 
> 
> On 12/19/07 11:17 PM, "M.Shiva" <[EMAIL PROTECTED]> wrote:
> 
> 
>> > 1. Are separate machines/nodes needed for the namenode, jobtracker, and
>> > slave nodes?
> 
> No.  I run my namenode and job-tracker on one of my storage/worker nodes.
> You can run everything on a single node and still get some interesting
> results because of the discipline imposed by map-reduce programming.
> 
> BUT... Running this stuff on separate nodes is the POINT of hadoop.
> 
>> > 2. If hadoop is configured in a multinode cluster (with one machine as
>> > the namenode and jobtracker and the other machines as slaves; the
>> > namenode acts as a slave node also), how do we handle namenode failover?
> 
> There are backup mechanisms that you can use to allow you to rebuild the
> namenode.  There is no official solution for the high-availability problem.
> Most hadoop systems work on batch problems, where an hour or two of downtime
> every few years is not a problem.
> 
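The backup mechanism referred to here is, in Hadoop versions of this era, configured through the namenode's metadata directories. A minimal sketch of a hadoop-site.xml entry, assuming your version accepts a comma-separated list for dfs.name.dir (the NFS path below is a made-up example, not a recommendation):

```xml
<!-- Write the namenode metadata (fsimage and edit log) to more than one
     directory, one of them on a remote NFS mount, so the filesystem image
     survives the loss of the namenode's local disk. -->
<property>
  <name>dfs.name.dir</name>
  <value>/home/hadoop/dfs/name,/mnt/nfs/hadoop/name</value>
</property>
```

The secondary namenode, which bin/start-dfs.sh launches by default, also periodically checkpoints the fsimage and merges in the edit log, which bounds how much log replay a rebuilt namenode has to do.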
>> > 3. This question is inter-related with the second question. In case of
>> > namenode failover, can a slave node be configured to act as a namenode
>> > itself and take control of the other slave nodes?
> 
> No.  You have to actually take specific action to bring up a new name node.
> This isn't hard, though.
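The specific action amounts to pointing a fresh machine at a surviving copy of the namenode metadata and starting the daemon. A hedged sketch, assuming the dfs.name.dir contents were mirrored to an NFS mount (all hostnames and paths here are illustrative):

```shell
# On the replacement machine:
# 1. Copy the surviving metadata into the configured dfs.name.dir.
mkdir -p /home/hadoop/dfs/name
cp -a /mnt/nfs/hadoop/name/. /home/hadoop/dfs/name/

# 2. Make the slaves find the new master: either reuse the old hostname,
#    or update fs.default.name in hadoop-site.xml across the cluster.

# 3. Start the namenode; datanodes re-register and report their blocks.
bin/hadoop-daemon.sh start namenode
```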
> 
>> > 4. If at all I rebuild the namenode again after a failover, how is the
>> > old multinode cluster setup reproduced? How do I rebuild the same
>> > multinode cluster setup, similar to the previous one?
> 
> I clearly don't understand this question because the answer seems obvious.
> If you build a new namenode and job tracker that have the same configuration
> as the old one, then you have a replica of the old cluster.  What is the
> question?
>  
>> > 5. Can we take backups of, and restore, files written to hadoop?
> 
> Obviously, yes.
> 
> But, again, the point of hadoop's file system is that it makes this largely
> unnecessary because of file replication.
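For the cases replication does not cover (operator error, loss of a whole cluster), plain copies work. A sketch using the standard command-line tools of the time; the paths and the second cluster's address are made up for illustration:

```shell
# Copy a directory tree out of HDFS onto local disk (or a tape-backed path).
bin/hadoop fs -copyToLocal /user/data/important /backup/important

# Restore it later.
bin/hadoop fs -copyFromLocal /backup/important /user/data/important

# Or mirror between two clusters with the distributed copy tool.
bin/hadoop distcp hdfs://namenode-a:9000/user/data \
                  hdfs://namenode-b:9000/user/data
```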
> 
>> > 6.There is no possibility of rewriting the same file in the hadoop (HDFS)
> 
> This isn't a question.  Should it have been?
> 

