Hi All,
I am using Hadoop 0.20.2 and tried to build Fuse DFS. After facing and resolving
many issues, I am now stuck at the following error.
Can somebody please help me resolve this?
Command : ant compile-contrib -Dlibhdfs=1 -Dfusedfs=1
Error:
[exec] make[1]: Entering directory
`/home
Hi,
I wrote up a small article about this; it works in some installations I
manage.
http://mapredit.blogspot.com/2011/11/nfs-exported-hdfs-cdh3.html
I would suggest using NFSv4, if available in your distro.
On Wed, Nov 30, 2011 at 6:10 AM, Stuti Awasthi wrote:
> Hey Joey,
> Thanks for the update :).
Hi,
After some googling I found this link:
http://search-hadoop.com/m/Ee9Vj1ZNSGR1&subj=Re+Hadoop+Hdfs+trunk+Commit+Build+560+Failure
I tried to apply the patch but am getting errors when applying it.
Here is what I did:
cd $HADOOP_HOME
patch -p1 < ~/Downloads/hdfs-780-4.patch
But got errors.
Hi,
I also tried "hdfs-1757-1.patch" but was unable to apply it correctly.
cd $HADOOP_HOME
patch -p1 < ~/Downloads/hdfs-1757-1.patch
Got errors :(
-----Original Message-----
From: Stuti Awasthi
Sent: Wednesday, November 30, 2011 3:48 PM
To: hdfs-user@hadoop.apache.org
Subject: RE: Facing issu
Hi, everyone
Following the discussion, I would like to know: if the DataNode reports an
overage block to the NameNode, which, according to Uma, the NameNode can reject,
what will the DataNode do then? Ask another datanode to copy a new replica to it and
delete the old one? Or will the NameNode arrange the work if the
>From: Zhanwei Wang [had...@wangzw.org]
>Sent: Wednesday, November 30, 2011 4:34 PM
>To: hdfs-user@hadoop.apache.org
>Subject: Re: Generation Stamp
>Hi, everyone
>Following the discussion, I would like to know if the DataNode reports an
>overage block to the NameNode
>
> Hi all,
> We currently have a 10-node cluster with 6 TB per machine.
> We are buying a few more nodes and considering having only 3 TB per machine.
>
> By default, HDFS assigns blocks according to used capacity, percentage-wise.
> This means that old nodes will contain more data.
> We prefer that
The default block placement policy checks the remaining space as follows:
if the remaining space on a node is greater than blksize * MIN_BLKS_FOR_WRITE
(default 5), then it treats that node as a good target.
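For example, assuming the default 64 MB block size, a node needs more than
5 * 64 MB = 320 MB of remaining space to be chosen as a target for a new block.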
I think the option may be to run the balancer to move the blocks based on DN
utili
Thanks Uma.
So when HDFS writes data, does it distribute the blocks only according to
percentage usage (and not actual utilization)?
I think that running the balancer between every job is overkill. I prefer to
format the existing nodes and give them 3 TB.
Lior
On Wed, Nov 30, 2011 at 3:02 PM, Um
Hi everyone
I have a problem enabling our distributed system to access HDFS.
The background:
In our system, we have 4~6 segment instances on one physical node, and each
segment forks a new process to deal with each new session. So if a client connects
to our system, we will hav
Hi,
you can limit the heap size for libhdfs:
export LIBHDFS_OPTS="-Xmx128m"
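For reference, libhdfs starts an embedded JVM through JNI, and the contents of
LIBHDFS_OPTS are passed to that JVM as options, so any standard JVM flag can be
set this way.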
best,
Alex
On Wed, Nov 30, 2011 at 3:25 PM, Zhanwei Wang wrote:
> Hi everyone
>
> I have a problem when I want to enable our distributed system to access
> hdfs.
>
> The background:
You could check out Hoop[1], a REST interface for accessing HDFS.
Since it's REST based, you can easily load balance clients across
multiple servers. You'll have to write the C/C++ code for
communicating with Hoop, but that shouldn't require too much more than
a thin wrapper around an HTTP client l
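As a minimal sketch (not Hoop's actual API: the host, port, and path in the URL
below are placeholders, and the real endpoint format is in the Hoop documentation),
such a wrapper in C with libcurl could look like this:

/* Sketch: read a file from HDFS through Hoop's REST interface via libcurl.
 * The URL is a placeholder; substitute the real Hoop host, port, and path. */
#include <stdio.h>
#include <curl/curl.h>

/* Write callback: append the HTTP response body to a FILE*. */
static size_t write_to_file(void *ptr, size_t size, size_t nmemb, void *stream)
{
    return fwrite(ptr, size, nmemb, (FILE *) stream);
}

int main(void)
{
    FILE *out = fopen("localcopy.txt", "wb");
    if (out == NULL)
        return 1;

    curl_global_init(CURL_GLOBAL_ALL);
    CURL *curl = curl_easy_init();
    if (curl == NULL)
        return 1;

    /* Placeholder URL: replace with the actual Hoop endpoint. */
    curl_easy_setopt(curl, CURLOPT_URL,
                     "http://hoop-server.example.com:14000/user/foo/data.txt");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_to_file);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);

    CURLcode res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    fclose(out);
    return (res == CURLE_OK) ? 0 : 1;
}

Since each request is plain HTTP, any standard HTTP load balancer can sit in
front of several Hoop servers.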
>
>From: Lior Schachter [lior...@gmail.com]
>Sent: Wednesday, November 30, 2011 7:04 PM
>To: hdfs-user@hadoop.apache.org
>Subject: Re: Load balancing HDFS
>Thanks Uma.
>So when HDFS writes data, does it distribute the blocks only according to the
>percentage usag
Hi Stuti,
Unfortunately I do not know the answer to that question. I'd suggest
contacting the author of the patches.
Daryn
On Nov 29, 2011, at 11:13 PM, Stuti Awasthi wrote:
> Hi Daryn,
> Thanks. I also wanted to know whether the patch "symlink41-hdfs" is the
> final patch which I can appl
Hi Stuti,
In general, if you want to start applying custom patches to Hadoop to
make your own build, you're on your own. If you're not comfortable
verifying the patches and digging through SVN history to see how final
they are, then you probably should just wait for a released version to
include t