Facing issue in building Fuse-DFS

2011-11-30 Thread Stuti Awasthi
Hi All, I am using Hadoop 0.20.2 and tried to build Fuse-DFS. After facing and resolving lots of issues, I am now stuck at the following error. Can somebody please help me resolve this? Command: ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1 Error: [exec] make[1]: Entering directory `/home
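The build command from the mail can be sketched as below. The prerequisite paths in the comments are illustrative assumptions, not details from the thread; fuse-dfs on 0.20.x generally also needs the fuse development headers installed.

```shell
# Illustrative prerequisites for a 0.20.x fuse-dfs build (paths are assumptions):
#   export JAVA_HOME=/usr/lib/jvm/java-6-sun
#   export HADOOP_HOME=/opt/hadoop-0.20.2
#   fuse headers installed (e.g. libfuse-dev / fuse-devel)

# Assemble the contrib build invocation quoted in the mail;
# -Dlibhdfs=1 and -Dfusedfs=1 enable the native pieces.
build_cmd() {
  echo "ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1"
}
build_cmd
```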

Re: Best option for mounting HDFS

2011-11-30 Thread Alexander C.H. Lorenz
Hi, I wrote a small article about this that works in some installations I manage. http://mapredit.blogspot.com/2011/11/nfs-exported-hdfs-cdh3.html I would suggest using NFS4, if available in your distro. On Wed, Nov 30, 2011 at 6:10 AM, Stuti Awasthi wrote: > Hey Joey, > Thanks for update :).
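A minimal sketch of the gateway approach the linked article describes: fuse-mount HDFS on one node and re-export that mount point over NFSv4. All hostnames, paths, and export options below are illustrative assumptions, not taken from the article.

```shell
# Config sketch only -- do not run as-is; names and options are assumptions.
#
# /etc/exports on the gateway that has HDFS fuse-mounted at /hdfs:
#   /hdfs  192.168.0.0/24(rw,fsid=0,sync,no_subtree_check)
#
# on an NFSv4 client:
#   mount -t nfs4 gateway.example.com:/ /mnt/hdfs
```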

RE: Facing issue in building Fuse-DFS

2011-11-30 Thread Stuti Awasthi
Hi, After some googling I found this link: http://search-hadoop.com/m/Ee9Vj1ZNSGR1&subj=Re+Hadoop+Hdfs+trunk+Commit+Build+560+Failure I tried to apply the patch but got errors. Here is what I did: cd $HADOOP_HOME patch -p1 < ~/Downloads/hdfs-780-4.patch But got the errors

RE: Facing issue in building Fuse-DFS

2011-11-30 Thread Stuti Awasthi
Hi, I also tried "hdfs-1757-1.patch" but could not apply it correctly. cd $HADOOP_HOME patch -p1 < ~/Downloads/hdfs-1757-1.patch Got errors :( -Original Message- From: Stuti Awasthi Sent: Wednesday, November 30, 2011 3:48 PM To: hdfs-user@hadoop.apache.org Subject: RE: Facing issu
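A common cause of such patch failures is a `-p` level that does not match the path prefixes inside the patch file. A self-contained demo (using a throwaway tree, not the actual hdfs patches from this thread) of how `-p1` strips one leading path component:

```shell
# Demo of patch -p levels with a throwaway tree (NOT the real hdfs patches).
set -e
tmp=$(mktemp -d)
cd "$tmp"
mkdir -p old/src new/src
printf 'hello\n' > old/src/a.txt
printf 'hello\nworld\n' > new/src/a.txt
# paths in the diff start with old/ and new/, so one component must be stripped
diff -ru old new > fix.patch || true
cd old
patch -p1 < ../fix.patch   # -p1 drops "old/"/"new/", leaving src/a.txt
cat src/a.txt
```

If the real patch's paths start with `a/` and `b/` (git-style), `-p1` from `$HADOOP_HOME` is right; if they already start at `src/...`, use `-p0`. `patch --dry-run` shows what would happen without touching any files.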

Re: Generation Stamp

2011-11-30 Thread Zhanwei Wang
Hi, everyone. Following the discussion, I would like to know: if the DataNode reports an overage block to the NameNode, which, according to Uma, the NameNode can reject, what will the DataNode do then? Will it ask another datanode to copy a new replica to it and delete the old one? Or will the NameNode arrange the work if the
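The comparison under discussion can be sketched as below. This is a toy simplification of the idea, not the actual NameNode code: a reported replica whose generation stamp is older than the one the NameNode expects is rejected.

```shell
# Toy sketch of a generation-stamp staleness check (NOT real NameNode logic).
is_stale_replica() {
  # a reported replica is stale if its generation stamp is older (smaller)
  # than the stamp the NameNode expects for that block
  reported_gs=$1
  expected_gs=$2
  [ "$reported_gs" -lt "$expected_gs" ]
}
if is_stale_replica 1005 1007; then
  echo "reject replica; re-replication restores the block on another node"
fi
```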

RE: Generation Stamp

2011-11-30 Thread Uma Maheswara Rao G
>From: Zhanwei Wang [had...@wangzw.org] >Sent: Wednesday, November 30, 2011 4:34 PM >To: hdfs-user@hadoop.apache.org >Subject: Re: Generation Stamp >Hi, everyone >Following the discussion, I would like to know if the DataNode reports an >overage block to the Namenode

Load balancing HDFS

2011-11-30 Thread Lior Schachter
> > Hi all, > We currently have a 10-node cluster with 6TB per machine. > We are buying a few more nodes and considering having only 3TB per machine. > > By default HDFS assigns blocks according to used capacity, percentage-wise. > This means that old nodes will contain more data. > We prefer that

RE: Load balancing HDFS

2011-11-30 Thread Uma Maheswara Rao G
The default block placement policy checks the remaining space as follows: if the remaining space on a node is greater than blksize*MIN_BLKS_FOR_WRITE (default 5), then it will treat that node as good. I think the option may be to run the balancer to move the blocks based on DN utili
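The remaining-space test Uma describes can be sketched as below; the 64MB block size matches the 0.20.x default, but the free-space figures are illustrative.

```shell
# Sketch of the placement check: a node is a good target if
# remaining > blocksize * MIN_BLKS_FOR_WRITE.
blocksize=$((64 * 1024 * 1024))     # 64MB, the 0.20.x default
min_blks_for_write=5
is_good_target() {
  remaining=$1
  [ "$remaining" -gt $((blocksize * min_blks_for_write)) ]
}
is_good_target $((1024 * 1024 * 1024)) && echo "1GB free: good target"
is_good_target $((100 * 1024 * 1024)) || echo "100MB free: skipped"
```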

Re: Load balancing HDFS

2011-11-30 Thread Lior Schachter
Thanks Uma. So when HDFS writes data, it distributes the blocks only according to the percentage usage (and not actual utilization)? I think that running the balancer between every job is overkill. I would prefer to format the existing nodes and give them 3TB. Lior On Wed, Nov 30, 2011 at 3:02 PM, Um
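The arithmetic behind the concern: percentage-based placement keeps nodes at similar used-percentages, so nodes with different capacities hold different absolute amounts. A small worked example with the 6TB/3TB figures from the thread:

```shell
# Two nodes at the same used-percentage hold different absolute amounts.
pct_used() {  # args: used_tb capacity_tb
  echo $((100 * $1 / $2))
}
echo "6TB node with 4TB used: $(pct_used 4 6)% full"
echo "3TB node with 2TB used: $(pct_used 2 3)% full"
# both come out at 66%, but the 6TB node stores twice the data
```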

problem of large distributed system access hdfs

2011-11-30 Thread Zhanwei Wang
Hi everyone I have a problem when I want to enable our distributed system to access hdfs. The background: In our system, we have 4~6 segment instances on one physical node, and each segment forks a new process to deal with a new session. So if a client connects to our system, we will hav

Re: problem of large distributed system access hdfs

2011-11-30 Thread Alexander C.H. Lorenz
Hi, you can limit the heap size for libhdfs: export LIBHDFS_OPTS="-Xmx128m" best, Alex On Wed, Nov 30, 2011 at 3:25 PM, Zhanwei Wang wrote: > Hi everyone > > I have a problem when I want to enable our distributed system to access > hdfs. > > The background:
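The suggestion in context: libhdfs embeds a JVM in every client process, and with 4-6 segments each forking per session, the default per-JVM heap adds up quickly. Setting the variable before the processes start caps each embedded JVM:

```shell
# Cap the heap of the JVM that libhdfs embeds in each client process.
# Must be exported in the environment of the process that loads libhdfs.
export LIBHDFS_OPTS="-Xmx128m"
echo "$LIBHDFS_OPTS"
```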

Re: problem of large distributed system access hdfs

2011-11-30 Thread Joey Echeverria
You could check out Hoop[1], a REST interface for accessing HDFS. Since it's REST based, you can easily load balance clients across multiple servers. You'll have to write the C/C++ code for communicating with Hoop, but that shouldn't require too much more than a thin wrapper around an HTTP client l
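A sketch of what the thin wrapper amounts to: build a Hoop-style REST URL and fetch it with an HTTP client. The host, port, path, and operation name below are illustrative assumptions, not taken from the Hoop documentation.

```shell
# Hypothetical Hoop-style request URL builder -- host, port, and the "open"
# operation are illustrative assumptions, not the documented Hoop API.
hoop_url() {  # args: host port hdfs_path op
  echo "http://$1:$2$3?op=$4"
}
hoop_url hoop.example.com 14000 /user/demo/data.txt open
# a C/C++ client would then fetch that URL, e.g. via libcurl
```

Because the interface is plain HTTP, the client side needs no JVM, and requests can be spread across several Hoop servers behind a load balancer.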

RE: Load balancing HDFS

2011-11-30 Thread Uma Maheswara Rao G
> >From: Lior Schachter [lior...@gmail.com] >Sent: Wednesday, November 30, 2011 7:04 PM >To: hdfs-user@hadoop.apache.org >Subject: Re: Load balancing HDFS >Thanks Uma. >So when HDFS writes data, it distributes the blocks only according to the >percentage usag

Re: Symbolic Links in HDFS

2011-11-30 Thread Daryn Sharp
Hi Stuti, Unfortunately I do not know the answer to that question. I'd suggest contacting the author of the patches. Daryn On Nov 29, 2011, at 11:13 PM, Stuti Awasthi wrote: > Hi Daryn, > Thanks. I also wanted to know whether the patch "symlink41-hdfs" is the > final patch which I can appl

Re: Symbolic Links in HDFS

2011-11-30 Thread Todd Lipcon
Hi Stuti, In general, if you want to start applying custom patches to Hadoop to make your own build, you're on your own. If you're not comfortable verifying the patches and digging through SVN history to see how final they are, then you probably should just wait for a released version to include t