Mohammad, the HA feature is very much functional, have you tried it? I would not say it is "not production ready", for I see it in use in several places already, including my humbly small MBP (two local NNs, just for the fun of it) :)
On Sat, Aug 11, 2012 at 12:46 AM, Mohammad Tariq <[email protected]> wrote:
> Very correctly said by Anil. Actually, Hadoop HA is not yet production
> ready, and you are about to begin your Hadoop journey, so I just thought
> of not mentioning it. If you want to use HA, just pull it from the
> trunk and do a build.
>
> Regards,
> Mohammad Tariq
>
>
> On Sat, Aug 11, 2012 at 12:42 AM, anil gupta <[email protected]> wrote:
>> Hi Aji,
>>
>> Adding onto what Mohammad Tariq said: if you use Hadoop 2.0.0-alpha, then
>> the NameNode is not a single point of failure. However, Hadoop 2.0.0 is
>> not of production quality yet (it is in alpha).
>> The NameNode used to be a single point of failure in releases prior to
>> Hadoop 2.0.0.
>>
>> HTH,
>> Anil Gupta
>>
>>
>> On Fri, Aug 10, 2012 at 11:55 AM, Ted Dunning <[email protected]> wrote:
>>>
>>> Hadoop's file system was (mostly) copied from the concepts of Google's
>>> old file system.
>>>
>>> The original paper is probably the best way to learn about that.
>>>
>>> http://research.google.com/archive/gfs.html
>>>
>>>
>>>
>>> On Fri, Aug 10, 2012 at 11:38 AM, Aji Janis <[email protected]> wrote:
>>>>
>>>> I am very new to Hadoop. I am considering setting up a Hadoop cluster
>>>> consisting of 5 nodes, where each node has 3 internal hard drives. I
>>>> understand HDFS has a configurable redundancy feature, but what happens
>>>> if an entire drive crashes (physically) for whatever reason? How does
>>>> Hadoop recover, if it can, from this situation? What else should I know
>>>> before setting up my cluster this way? Thanks in advance.
>>>>
>>>>
>>>
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta

--
Harsh J
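[Editor's note: for Aji's original question about per-drive failure, the relevant knobs live in hdfs-site.xml. Replication is set with dfs.replication, each DataNode lists one directory per physical drive in dfs.datanode.data.dir, and when a drive is lost the NameNode re-replicates the affected blocks from the surviving copies elsewhere in the cluster. A minimal sketch for the 5-node, 3-drives-per-node setup described above follows; the property names are from Hadoop 2.x, and the mount paths are placeholders, not taken from this thread.]

```xml
<!-- hdfs-site.xml: a minimal sketch, assuming Hadoop 2.x property names
     and placeholder mount points for the three drives on each node. -->
<configuration>
  <property>
    <!-- Keep 3 copies of every block; HDFS re-replicates from the
         surviving copies when a drive or node is lost. -->
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <!-- One directory per physical drive; the DataNode spreads block
         files across them. Paths here are hypothetical examples. -->
    <name>dfs.datanode.data.dir</name>
    <value>/mnt/disk1/hdfs/data,/mnt/disk2/hdfs/data,/mnt/disk3/hdfs/data</value>
  </property>
  <property>
    <!-- Let a DataNode keep serving after one of its drives fails,
         instead of shutting the whole node down (the default is 0). -->
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>1</value>
  </property>
</configuration>
```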
