I don't see how federation would help you run different clusters on the _same_ machines. On top of that, federation isn't production-ready: the NameNode can run into massive GC issues on heavily loaded systems, which will be the case here. To run multiple (maybe single-node) clusters, the best way is a cloud-based solution, e.g. OpenStack with Docker containers. A Mesos-driven setup can also help here; there are some good tutorials available.
BG, Alexander

> On 26 Jan 2015, at 10:34, Azuryy Yu <[email protected]> wrote:
>
> Hi,
>
> I think the best way is to deploy HDFS federation with Hadoop 2.x.
>
> On Mon, Jan 26, 2015 at 5:18 PM, Harun Reşit Zafer <[email protected]> wrote:
>
> Hi everyone,
>
> We have set up and been playing with Hadoop 1.2.x and its friends (HBase,
> Pig, Hive etc.) on 7 physical servers. We want to test Hadoop (maybe
> different versions) and the ecosystem on physical machines (virtualization
> is not an option) from different perspectives.
>
> As a bunch of developers we would like to work in parallel. We want every
> team member to play with his/her own cluster. However, we have a limited
> number of servers (strong machines, though).
>
> So the question is: by changing port numbers, environment variables and
> other configuration parameters, is it possible to set up several
> independent clusters on the same physical machines? Are there any
> constraints? What are the possible difficulties we are to face?
>
> Thanks in advance
>
> --
> Harun Reşit Zafer
> TÜBİTAK BİLGEM BTE
> Bulut Bilişim ve Büyük Veri Analiz Sistemleri Bölümü
> T +90 262 675 3268
> W http://www.hrzafer.com
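For reference, the port/config approach the original question asks about comes down to giving each cluster its own ports, on-disk directories and daemon environment. A minimal sketch for a second Hadoop 1.2.x cluster "B" alongside cluster "A" (all host names, paths and non-default port numbers below are illustrative assumptions, not values anyone must use):

```shell
# Per-developer cluster "B" on the same machines as cluster "A".
# Illustrative sketch only; property names follow Hadoop 1.x conventions.

# 1. Give each cluster its own config, log and PID directories, so the
#    daemons of A and B never read or clobber each other's files:
export HADOOP_CONF_DIR=/etc/hadoop-clusterB/conf
export HADOOP_LOG_DIR=/var/log/hadoop-clusterB
export HADOOP_PID_DIR=/var/run/hadoop-clusterB

# 2. In $HADOOP_CONF_DIR, move every port and storage path off the defaults
#    (assumed example values; cluster A keeps the stock ones in parentheses):
#    core-site.xml:   fs.default.name           = hdfs://master:9000
#                     hadoop.tmp.dir            = /data/clusterB/tmp
#    hdfs-site.xml:   dfs.name.dir              = /data/clusterB/name
#                     dfs.data.dir              = /data/clusterB/data
#                     dfs.http.address          = 0.0.0.0:51070  (50070)
#                     dfs.datanode.address      = 0.0.0.0:51010  (50010)
#                     dfs.datanode.ipc.address  = 0.0.0.0:51020  (50020)
#                     dfs.datanode.http.address = 0.0.0.0:51075  (50075)
#    mapred-site.xml: mapred.job.tracker and the JobTracker/TaskTracker
#                     HTTP addresses moved off their defaults the same way.

# 3. With HADOOP_CONF_DIR exported, start cluster B's daemons:
# start-dfs.sh && start-mapred.sh
```

Every listening address any daemon binds must be changed, and the `dfs.name.dir`/`dfs.data.dir` paths must be disjoint per cluster; the main practical difficulties are then port bookkeeping and the fact that all clusters compete for the same RAM, disks and network.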
