Hi Sujee,

Thanks a lot for your interest in HA.

For #1:
If you can invest in NFS filers, that is one option. If you want to try
this, you can use the released Hadoop-2 version.
  But options #2 and #3 below avoid this external hardware dependency.
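As a rough sketch (the mount path and hostname here are just examples, not from a real setup), the NFS option boils down to pointing the shared edits directory at the NFS mount in hdfs-site.xml:

  <!-- hdfs-site.xml: both NameNodes point at the same NFS-mounted directory -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <!-- example path; /mnt/filer must be an NFS mount visible to both NNs -->
    <value>file:///mnt/filer/hdfs/shared-edits</value>
  </property>

The filer itself then has to be made highly available, which is the external hardware dependency mentioned above.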

For #2, you can take a look at HDFS-3399.
  We have been testing with BookKeeper for the last 2-3 months and it is
going well. BK is making progress on the autorecovery and security parts.
Autorecovery is almost done (BOOKKEEPER-237) and will be released in BK
4.2 very soon. BK has already started work on the security part as well.
This integration will also come out with the next Hadoop-2 release. I have
also attached the tested scenarios in HDFS-3399 for your reference, if you
want to take a look. There is also a subtask under that umbrella JIRA for
user manual information.
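To give an idea of the shape of the BookKeeper integration (the ZooKeeper hostnames and ledger path below are hypothetical), the shared edits directory becomes a bookkeeper:// URI plus a journal plugin class, roughly:

  <!-- hdfs-site.xml: edits go to a BookKeeper ensemble via its ZK quorum -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <!-- example ZK ensemble and ledger path -->
    <value>bookkeeper://zk1:2181;zk2:2181;zk3:2181/hdfsjournal</value>
  </property>
  <property>
    <name>dfs.namenode.edits.journal-plugin.bookkeeper</name>
    <value>org.apache.hadoop.contrib.bkjournal.BookKeeperJournalManager</value>
  </property>

This is the pluggable-journal mechanism in action: the URI scheme selects the journal manager implementation.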


For #3, you can take a look at HDFS-3077.
   Work is going on actively under this umbrella JIRA.
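Since HDFS-3077 is still in progress, the exact configuration may change, but the quorum-based approach is expected to look something like this (JournalNode hostnames and the journal identifier are hypothetical):

  <!-- hdfs-site.xml: edits written to a quorum of JournalNodes -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <!-- example: three JournalNodes, journal named after the nameservice -->
    <value>qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster</value>
  </property>

Here a write succeeds once a majority of the JournalNodes acknowledge it, so no single shared-storage device is needed.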


For #4:
I am not sure anyone is working on it.

The advantage here is that you can plug in whichever shared storage you want.

Regards,
Uma

On Wed, Sep 5, 2012 at 4:07 AM, Sujee Maniyam <su...@sujee.net> wrote:

> Hello devs,
>
> I am trying to understand the current state / direction of  namenode
> HA implementation.
>
> For using shared directory, I see the following options
> (from
> http://www.cloudera.com/blog/2012/03/high-availability-for-the-hadoop-distributed-file-system-hdfs/
>   and  https://issues.apache.org/jira/browse/HDFS-3278)
>
> 1) rely on external HA filer
> 2) multiple edit directories
> 3) book keeper
> 4) keep edits in HDFS / quorum based
>
> is there going to be an 'official / supported' method, or it is going
> to be a configurable choice when setting up a cluster?
>
> thanks
> Sujee
> http://sujee.net
>